Sixsigma Ultimate

The document traces the evolution of Six Sigma from its origins in statistical analysis in the 1920s to its development at Motorola in the 1980s. It describes key phases and concepts in Six Sigma's evolution, including the development of statistical process control, quality circles, total quality management, ISO 9000 standards, and the Baldrige criteria. Six Sigma aims to reduce defects through its DMAIC methodology and focus on customer requirements, processes, and sustainability. It has been widely adopted globally across industries as a way to improve quality, efficiency, and competitiveness.

The first person to define sigma as a statistical measure was the mathematician Carl Friedrich Gauss, who described the standard deviation of data from a central value such as the mean. The first use of sigma as a metric or standard for evaluating quality dates back to the 1920s. The concept of Six Sigma itself was developed by Motorola in the 1980s, when the company was experiencing significant quality problems and struggling to stay ahead in an increasingly competitive market; Six Sigma emerged from Motorola's need to turn the company around. Since then, Six Sigma has continued to evolve, absorbing important concepts and tools developed throughout the 20th century.

 
Phases of Evolution
•Development of the Plan-Do-Check-Act cycle in the 1920s
•Development of statistical process control in the 1960s
•Development of quality circles in the 1980s
•Total quality management and ISO 9000
•The Baldrige Criteria for Performance Excellence

PDCA, SPC and Quality Circles

PDCA
In the 1920s, the Plan-Do-Check-Act cycle was developed by Walter Shewhart, a physicist working at Western Electric's Bell Laboratories, and was later refined by Deming. The Plan-Do-Check-Act cycle is an iterative problem-solving loop that serves as the basis for nearly all continuous improvement approaches.

SPC
Statistical process control (SPC) is a method that uses statistically based tools and techniques to manage and improve processes. SPC originated from the work of Walter Shewhart in the 1920s; Shewhart's control charts were invented as a way to monitor and control process variation. The engineers at Toyota devised a methodology known as the Toyota Production System (TPS) between 1948 and 1975. TPS was a clear precursor of Lean manufacturing, focusing on eliminating waste, reducing inventory, and reducing processing time.

Quality Circles
Toyota engineers also pioneered quality circles: small teams of six to eight workers who meet periodically to identify opportunities for improvement and then submit those ideas to management. The steps involved include organization, training, problem identification, problem analysis, solution development, and presentation.

Total Quality Management (TQM)


Total quality management, also referred to as TQM, was developed in Japan in the 1950s, but it did not achieve prominence in the U.S. until the 1980s. The idea behind TQM was to develop a comprehensive set of management practices and tools focused on meeting customers' expectations. TQM emphasizes measurement and controls, and it aims to involve all employees in process improvement throughout the organization.

ISO 9000
ISO 9000 is a set of standards that helps businesses organize to produce high-quality products and services. Companies certified as ISO 9000 compliant can conduct business locally and internationally with the assurance of meeting agreed-upon quality levels. Together, total quality management and ISO 9000 have helped companies reduce waste and improve quality.

Malcom Baldrige Criteria


At the time, many U.S. companies still did not want to spend the time and money required to improve quality and efficiency. In response, the U.S. Congress established the Malcolm Baldrige National Quality Award to inspire companies to achieve higher quality by giving them this recognition. The award is given annually to companies that exemplify innovation, excellence, and world-class performance based on seven criteria. The standards for the award are published by the National Institute of Standards and Technology (NIST) to provide guidance for companies that seek to establish excellence in performance management.

Benchmarking
During the late 1980s and early 1990s, benchmarking and re-engineering were also developed amid intense work and innovation in the quality sciences.

We define benchmarking as 'a strategy to improve the organization by adopting best practices of industry leaders'. Benchmarking primarily helps an organization identify what it does well and what needs improvement. It also helps the organization follow and adopt industry leaders' processes in order to achieve superior performance levels. We cannot say that benchmarking solely influenced the evolution of Six Sigma, but it has been widely accepted as a methodology within Six Sigma for setting improvement goals and identifying best practices and improvement solutions that deliver superior products and services.
Re-engineering
Re-engineering is a method for streamlining the organization. In the 1990s, most organizations were built on the traditional pyramid structure, with top-heavy management populated by territory-guarding, empire-building executives. This led to rigid divisional boundaries that entrenched the status quo and stifled innovation.

Six Sigma has several facets to its meaning. Here, sigma represents standard deviation, a universally accepted metric for quantifying how much variability there is within a process. This information can then be used to determine whether the process is able to stay within the quality limits set by the customer.

In general, organizations use sigma as a way to measure the quality and performance of their processes. The sigma scale is expressed in defects per million opportunities, or DPMO: six sigma equates to 3.4 defects per million opportunities, which is an extremely high level of quality.
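To make the DPMO arithmetic concrete, here is a minimal Python sketch (not from the source; the defect counts, the helper names, and the use of the conventional 1.5-sigma long-term shift are all assumptions for illustration):

# Minimal sketch of DPMO and an approximate sigma level.
# Assumes the common 1.5-sigma long-term shift convention; figures are hypothetical.
from statistics import NormalDist

def dpmo(defects, units, opportunities_per_unit):
    """Defects per million opportunities."""
    return defects / (units * opportunities_per_unit) * 1_000_000

def sigma_level(dpmo_value, shift=1.5):
    """Approximate sigma level implied by a DPMO figure."""
    process_yield = 1 - dpmo_value / 1_000_000
    return NormalDist().inv_cdf(process_yield) + shift

d = dpmo(defects=27, units=5_000, opportunities_per_unit=10)   # 540 DPMO
print(f"DPMO: {d:.0f}, approximate sigma level: {sigma_level(d):.2f}")

At 3.4 DPMO the same calculation returns a sigma level of about 6.0, which matches the figure quoted above.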

Six Sigma is a five-phase continuous improvement methodology that follows Define, Measure, Analyze, Improve, and Control, also called the DMAIC methodology.

The DMAIC methodology primarily focuses on –


•Identifying improvement opportunities
•Analyzing the current processes
•Finding different improvement solutions
•Implementing those solutions
•Maintaining control of those improvements

Six Sigma is also an organizational system used at a strategic level. It is more than a set of metric-based problem-solving and process improvement methodologies; it is a continuous business improvement process that focuses on four key areas,

•Understanding customer requirements


•Analyzing current products and services
•Aligning current processes to meet customers' requirements
•Ensuring stability and sustainability of quality in the processes.

It is important to understand why an organization would invest in the Six Sigma methodology. Some of the reasons include,

•Six Sigma aims to reduce mistakes and defects that can occur within the processes.
•Six Sigma has a very proven track record.
•Six Sigma has been globally accepted as a profitable and winning business strategy.

More and more companies globally are embracing Six Sigma in order to become more competitive. Six Sigma not only helps improve operational efficiency but also aids in quality improvement.

Motorola, a giant electronics corporation, formalized Six Sigma. Since then, Six Sigma has been implemented across the globe by many organizations, including companies such as GE and Bank of America. Six Sigma started in electronics and then gained momentum in the automobile industry. It is applicable to any organization, small or large, because it is a business initiative that can help all organizations become better than they currently are. Since its development, Six Sigma has been deployed in banking and financial services, IT, health care, customer service, government, hotels and the hospitality industry, and consulting companies.
As a methodology, Six Sigma focuses on more than just error reduction. For Six Sigma to be truly successful, all employees in the organization need to adopt its principles. Six Sigma is used as a measure of quality, but it is not strictly a quality initiative: a large part of the initiative focuses on how to improve a process so that defects are reduced, quality is improved, and, as a result, the organization realizes the benefits.

The benefits of Six Sigma are widespread throughout the organization –

•The main benefit of applying Six Sigma is creating an environment of sustained success. Six Sigma needs to be implemented in such a way that it becomes permanently adopted by the organization.
•Six Sigma aims to change the structure of the organization for the better. It also provides a realistic performance goal that is common across all processes and systems: reaching 3.4 defects per million opportunities.
•A key benefit of Six Sigma is that as the quality of products and services improves, those products and services have enhanced value to customers; this enhanced value helps capture larger sections of the market and increases revenue for the organization.
•Six Sigma helps facilitate rapid change and improvement within the organization, rather than just continuous improvement through gradual incremental change.
•Six Sigma teams work as cross-functional teams, which promotes learning new skills from the Six Sigma methodology and also helps cross-pollinate improvements between different areas of the organization, strengthening all of the departments and processes involved.
•Six Sigma helps the organization execute strategic change, which helps move the company forward.

Six Sigma is a way of bringing about the necessary changes to both the actual processes and the strategic direction of the organization, placing the organization in a position where it is poised to succeed on a new strategic path forward.

Six Sigma is a business initiative built around continuous improvement tools and methodologies that focus on achieving near-perfection in products and services. Statistically, six sigma corresponds to 3.4 defects per million opportunities; in other words, products and services are 99.9997% defect free.

In general, Six Sigma is a five-phase methodology that follows Define, Measure, Analyze, Improve, and Control, also known as DMAIC. To understand the importance of Six Sigma to an organization, it helps to compare a three sigma process with a six sigma process.

For instance, a process operating at the three sigma level would mean 20,000 items lost per hour, versus seven items lost per hour for a six sigma process. Similarly, a three sigma process would mean 5,000 incorrect tasks performed per week, versus 1.7 incorrect tasks per week for a six sigma process.

To take another example, a three sigma process would mean no electricity for almost seven hours per month, versus a six sigma process with no electricity for one hour once every 34 years. Likewise, three sigma would mean 54,000 wrong medical prescriptions per year, compared with one wrong prescription every 25 years for a six sigma process.

If a process is repeated many times, the outcome will differ slightly with each repetition; this is known as 'variation'. For instance, if we cut a piece of fabric for a shirt using a pattern, each cut will be slightly different: sometimes narrower in places, or with a different curve to the cut. The end product is still a shirt, but each one differs slightly from the next. With such variation there is opportunity for error, meaning the product might go outside the specification limits.
Six Sigma methodologies aim to reduce this variation so that the opportunity for error is reduced, which in turn increases quality and productivity. Process outputs are plotted as data points around a mean or target value. The deviation of these data points from the target value is measured as standard deviation, from which we can express distances of plus or minus one sigma, two sigma, three sigma, and so on. The farther a point is from the target, the higher its sigma value.

In statistical terms, a Six Sigma process must perform so that data points as far as six sigma from the target or mean value still fall within the specification limits. In other words, 99.9997% of the output is defect free, which means a process like this will have only 3.4 defects per million opportunities or fewer.
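The 3.4 DPMO figure itself follows from a convention common in Six Sigma practice, the 1.5-sigma long-term shift of the process mean. Here is a minimal Python sketch assuming that convention (the shift is an assumption of the sketch, not something stated in the text above):

# With a 1.5-sigma shift, a six sigma process leaves 6 - 1.5 = 4.5 sigma
# between the shifted mean and the nearest specification limit.
from statistics import NormalDist

tail = NormalDist().cdf(-(6 - 1.5))              # probability of exceeding the nearer limit
print(f"{tail * 1_000_000:.1f} DPMO")            # ~3.4 defects per million opportunities
print(f"{(1 - tail) * 100:.4f}% defect free")    # ~99.9997%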

The formula for the transfer function is given by

Y = f(X)

The transfer function illustrates the concept that the important process outputs (known as Ys) are a result of the drivers or inputs (known as Xs) within the process. This gives us the equation Y = f(X). Let us now look at the equation a little more closely.

The equation shows that the output variable Y is dependent: its value depends on the value of X, where X is an independent input variable and each X represents an input factor that determines or affects Y. The f indicates that there is a functional relationship between the variables, in which one depends on the other. The transfer function relates the inputs of a process to its outputs; within an organization or department, a process has a measurable and therefore controllable output. This output then becomes input as it flows, along with the output from other processes, into the next level of the organization, producing departmental or organizational-level outputs.
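As a hypothetical illustration of Y = f(X) (the process, variable names, and coefficients below are invented for this sketch and are not from the source), the output Y could be an order's total lead time and the Xs could be inputs such as queue size and handling times:

# Hypothetical transfer function Y = f(X1, X2, X3) for an order-fulfillment process.
def lead_time_hours(order_queue_size, picking_minutes, shipping_hours):
    """Y = f(X): total lead time in hours for one order (illustrative model only)."""
    queue_wait_hours = order_queue_size * 0.25   # assume each queued order adds ~15 minutes
    picking_hours = picking_minutes / 60
    return queue_wait_hours + picking_hours + shipping_hours

# Changing a key input variable (an X) changes the output (the Y).
print(lead_time_hours(order_queue_size=8, picking_minutes=30, shipping_hours=24))  # 26.5
print(lead_time_hours(order_queue_size=2, picking_minutes=30, shipping_hours=24))  # 25.0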

As Six Sigma professionals, we must understand the relationship between the Xs and the Ys and how the outputs of various smaller processes lead to organizational outputs. This information can then be put into a functional graph that identifies the inputs, outputs, and processing tasks required to transform the inputs into the outputs.

X, the input variable, represents materials, information, and tasks that are acted on by people using machines or equipment. Next come the processing steps, which include all tasks required to effect the transformation of inputs into outputs. Finally comes Y, the output variable, which includes the products, services, information, data, or material used in the next step of the process. Every process within an organization has inputs and outputs, arranged so that the output of one process becomes the input of the next.

The key input variables are those process inputs that have a strong potential to impact process outputs, so it is crucial for team members to have direct control over them. When a change is made to a key input variable, it usually affects at least one key output variable, also referred to as a 'Big Y'. The Big Ys are any process outputs that fulfill, or lead to the fulfillment of, the Six Sigma deployment goals. There are usually fewer Big Ys than key input variables, but there is always at least one key output variable for each process. The Big Ys may include significant functional goals that directly influence an organization's key objectives relating to customers, profitability, efficiency, quality, and productivity. From this point of view, the Big Ys are the most important variables for Six Sigma professionals.
The key output variables, the Big Ys, are a result of the key input variables, the Xs, within processes. The goal of Six Sigma is to identify which vital few input variables influence the desired output the most.

Each phase of the DMAIC methodology can be described by how it contributes to this goal.

1. Define: The purpose of the Define phase is to identify and understand the big Y, and possibly the potential Xs.
2. Measure: The purpose of the Measure phase is to measure those Xs and the big Y and begin to prioritize the vital Xs.

3. Analyze: Within the Analyze phase, Six Sigma professionals test the XY relationships and verify or quantify the
most important Xs.

4. Improve: In the Improve phase, the Six Sigma team begins to implement solutions to improve the big Y and the important Xs.

5. Control: In the Control phase, the Six Sigma team monitors the performance of the important Xs and the big Y
over time.
It is also important to understand the leverage principle within Six Sigma. Key aspects of the leverage principle are –

•Not all input variables are equal; some affect the output more than others.
•With reference to Six Sigma, the leverage principle refers to applying changes to the vital few Xs that have the greatest impact on the big Y.
•A major part of this leverage comes from a surprisingly small number of contributors, or Xs. Improving the outcome comes down to finding the critical few inputs that give us that leverage.

The concept of the vital few versus the trivial many comes from the early 20th-century work of the Italian sociologist and economist Vilfredo Pareto. The Pareto principle, also known as the 80/20 rule, holds that 20% of the inputs in any process account for 80% of the influence on the outputs. Therefore, to seek leverage, search for the few variables that are most influential in solving problems in operations, assembly, distribution, and other areas of the facility. The only way to find the vital few is to follow a structured process for analyzing cause-and-effect relationships.
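One structured way to surface the vital few is a simple Pareto analysis. The Python sketch below is illustrative only (the defect causes, counts, and the 80% cut-off are assumptions, not data from the source):

# Sort contributors by impact and keep those covering roughly 80% of the effect.
defect_counts = {                      # hypothetical causes and counts
    "wrong address": 120, "late pick": 95, "damaged box": 40,
    "label smudge": 15, "missing invoice": 10, "other": 8,
}
total = sum(defect_counts.values())
cumulative = 0.0
vital_few = []
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    cumulative += count / total
    vital_few.append(cause)
    if cumulative >= 0.80:             # stop once ~80% of defects are accounted for
        break
print(vital_few)                       # the few causes offering the most leverage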

Improvement methodologies
Six Sigma methodology emerged directly from mainstream quality control methods. In the 1970s, a Japanese company took over a Motorola plant that produced televisions. The Japanese corporate leaders changed the management workflow, but they retained the workforce, technology, and designs. Motorola realized that, in order to survive, it really had to change.

In the mid-1980s, Motorola CEO Bob Galvin launched the initiative known as Six Sigma, which was used primarily to turn the company around. Six Sigma enabled Motorola to reduce product defects and returns to 1/20 of the preceding numbers. Motorola achieved a five-fold growth in sales, with profits rising to nearly 20%, and cumulative savings of $16 billion. Its stock price gains compounded at an annual rate of 21.3%, and Motorola won the prestigious Malcolm Baldrige National Quality Award in 1988. Then, in 1995, Jack Welch converted GE into a Six Sigma organization and achieved legendary results: General Electric estimated benefits on the order of $10 billion during the first five years of deployment. After the results at Motorola and GE, Six Sigma's popularity took off, resulting in organization-wide deployments in Global 2000 companies, in manufacturing and service sectors alike.

 
Characteristics of Six Sigma Process
•Six Sigma incorporates concepts and tools from proven continuous improvement methodologies. The DMAIC approach is based on the plan-do-check-act (PDCA) improvement cycle.
•Six Sigma focuses on the rigorous application of statistics to control processes, an approach that originated with statistical process control and is also used in total quality management.
•Six Sigma promotes the use of proven tools and techniques to reduce defects.
•In general, Six Sigma uses many of the same tools found in total quality management, typically a smaller subset of them.
•Six Sigma also promotes the use of well-trained, independent teams to handle well-defined projects. The use of independent teams and worker participation was pioneered by the Japanese with quality circles and the gemba (going out to the place where the operations occur to look for improvements).
•Six Sigma promotes reorganization to focus on processes and quality rather than functional silos. Re-engineering, TQM, and ISO 9000 all advocate a process-oriented approach to organizing for quality.

Difference between Lean and Six Sigma


Six Sigma and Lean are both methodologies that focus on continuous improvement, with slightly different aims.

Lean
1. Lean is a continuous improvement initiative that streamlines and improves processes by reducing waste and cycle time.
2. Lean focuses on increasing the velocity of processes by eliminating barriers and waste, speeding up the production processes, and eliminating non-value-added activities.

Six Sigma
1. Six Sigma focuses on reducing defects and variation within processes.
2. Six Sigma focuses on eliminating defects through variation reduction and improving customer satisfaction.
Although Lean and Six Sigma originated as two different strategies in two different environments with different
tool sets and methodologies, they are increasingly being seen as complementary processes to reduce waste
while reducing variation. From this outlook, the combined methodologies are referred to as Lean Six Sigma. By
combining Lean and Six Sigma, organizations are able to impact the bottom line and really enhance the customer
experience.

Six Sigma pioneers


The quality theory upon which Six Sigma is based was developed by several key pioneers. The most well-known pioneers, noted for their significant contributions toward developing the foundation of Six Sigma, include Joseph Juran, W. Edwards Deming, Walter Shewhart, Kaoru Ishikawa, and Genichi Taguchi.

 
Joseph Juran Contribution
•Joseph Juran's ideas were instrumental in the development of total quality management (TQM) theory, a well-known basis for quality initiatives.
•His Quality Control Handbook, published in 1951, quickly became the reference for quality managers and remains a key reference even today.
•Juran was one of the first people to recognize that the Pareto principle, which states that 80% of problems are caused by 20% of potential causes, applies to quality management and process improvement.
•Juran influenced top management in Japan in its move to adopt quality principles.
•Juran played a key role in developing the Japanese economy during the 1950s.
•The Juran Institute also influenced top management in the U.S., just as Juran had done in Japan.
•Juran also developed the Juran Trilogy for quality management, which consists of three basic principles – quality planning, quality control, and quality improvement.

W. Edwards Deming Contribution


W. Edwards Deming was a leader in the field of statistical methods of quality control.

•He was influential in Japan, where he taught statistical methods to members of the Union of Japanese Scientists and Engineers (JUSE).
•Deming became popular in the U.S. much later in his career, when he proposed his 14 Points and 7 Deadly Diseases of management. These guidelines aim to create constancy of purpose, meaning that an organization's quality effort must be focused on a single goal.
•His most notable contributions were his support of the plan-do-study-act cycle as a method for system improvement.
This method is still used today and is similar to other process improvement tools used in Six Sigma such as the DMAIC
methodology.

Walter Shewhart Contribution


Walter Shewhart was a pioneer who led the way for the use of statistics for quality management. Walter noticed,
while he was working at Western Electric Company during the 1920s, that engineers needed to reduce the rate
and occurrence of failures in order to improve the production processes and make them more economical.

•Shewhart described the problem of reducing errors in a process in terms of process variation, the deviation from the mean, which is called 'sigma'.
•Shewhart described process variation as one of two types – assignable cause and chance cause variation. Assignable cause variation can be traced back to a specific root cause, whereas chance cause variation cannot. The idea of reducing variation through the application of these statistical methods was the basis for statistical process control, or SPC.
•Shewhart was the first to use control charts to indicate where variation occurs within a process and when we should act on it.
•Shewhart also made noteworthy contributions to the development of the plan-do-check-act cycle with Dr. Deming.

Kaoru Ishikawa Contribution


Dr. Kaoru Ishikawa is considered the father of Japanese quality control. His contributions include –

•Ishikawa was able to distinguish the Japanese approach to total quality control, which he referred to as company-wide
quality control, from the western style.
•Ishikawa’s ideas are evident even today in quality management, such as the concept of quality circles and next operation
as customer.
•Ishikawa's major contribution to the development of quality management theory, and to the base upon which Six Sigma is built, is the cause-and-effect diagram. The cause-and-effect diagram is a simple graphic method used to identify the root causes underlying process problems without the use of complicated statistics. It is also commonly referred to as the Ishikawa diagram in his honor.

Dr. Genichi Taguchi


Dr. Genichi Taguchi is known as the father of quality engineering. Dr. Taguchi was instrumental in developing quality engineering techniques to reduce cycle time, which is a large part of the Six Sigma process improvement methodology.
 
•Taguchi is well known for his theory that manufacturing processes are influenced by external factors, which he referred to as 'noise'.
•Taguchi said that in order to improve a process and reduce the costs involved, managers need to identify and separate the noise from the process signal, because these are the vital elements of the process.
•Taguchi's theory for quality management consists of two parts. The first is the quality loss function, an equation used to calculate how much money is lost due to variability in a process. The other part is design robustness, which requires a process to be able to produce high-quality products consistently, regardless of external factors.


Determining Project Readiness


The Six Sigma approach starts with an organization deciding that it needs to change and improve. The process begins with small improvement projects, where opportunities for improvement are identified and projects are launched. These initial projects are typically three to six months in duration and are often led by a Black Belt, though Six Sigma Green Belts may lead smaller improvement projects. These projects go through the DMAIC stages, and their focus is on reducing cycle time, errors, or costs, or improving a process in other ways.

When organizations start implementing Six Sigma, there are several considerations they need to take into
account. Some of the crucial points to be taken into account are –

•It is extremely critical that top management has bought into the strategic and operational goals of Six Sigma deployment
within their organization.
•If Six Sigma has been chosen as the path forward, management needs to choose the most appropriate projects from those currently within the organization – those that link to the strategic and operational goals.
•Six Sigma may not be suitable for every improvement project within the organization. The best Six Sigma projects are those linked to organizational goals; choose the projects where the benefits of applying Six Sigma will be the greatest and the most influential.
•Implementing a Six Sigma initiative is a very large undertaking, and management may want to start with a few smaller projects before implementing the initiative throughout the organization. This also helps grow momentum as people see the success of those small initial projects. Once the organization has analyzed the need for change and decided on a revolutionary approach, it will most likely consider implementing Six Sigma or a combination of Six Sigma and Lean.

Remember, not all projects are appropriate for Six Sigma!

First, when another process or quality improvement strategy is already being applied to a project, applying Six Sigma to the same project may cause a conflict, and it may not be possible to conduct the two simultaneously.

Second, Six Sigma should not be applied when the changes currently being implemented in the project are already overwhelming; adding the additional demands of a Six Sigma initiative may place too much strain on the resources and staff assigned to that project.

Third, a Six Sigma initiative should not be started when the potential gains are not worth the costs involved. Six Sigma requires a complete transformation within the organization and cannot be taken lightly; it calls for a lot of time and can be very expensive. Therefore, Six Sigma should only be applied where the cost can be justified by the benefits. This is most likely in projects that are fundamental to the organization because they are linked to strategic or organizational goals.

Assessing Project Readiness


Before implementing Six Sigma, the organization must perform a readiness assessment to determine whether Six Sigma will yield the necessary improvements and whether the organization is truly ready for the change. The primary motive behind a readiness assessment is to make a strong strategic case for Six Sigma based on a demonstrated need that cannot be filled by existing organizational strategies.

Three key steps involved in the process of assessing an organization’s readiness are –

•The first step involves assessing the organization's outlook and future path. A key question to answer here is: is change a critical business need now, based on bottom-line, cultural, or competitive needs? Additional questions to ask in this step include: does the organization have a clear strategic course, and if so, what is that course? Are we likely to meet our financial and growth goals? Do we need to respond more efficiently and effectively to changing circumstances?
•The second step involves evaluating current performance. A key question to answer here is: is there a strong strategic rationale for applying Six Sigma to the business? It is important for organizations to choose the best strategy, and that choice depends on the organization's current performance level.

The Six Sigma approach achieves its best results when it is implemented by a high-performing organization. Medium-performing and low-performing organizations should probably implement more basic techniques first to improve performance before moving on to Six Sigma, and low-performing organizations are generally not good candidates for these revolutionary approaches. It is critical for management to ensure that the performance level of the organization or department is high enough to warrant a Six Sigma initiative.

•The third step involves reviewing existing systems and the capacity for change. A key question here is whether the existing improvement systems and capacity for change can achieve the degree of change necessary to keep the organization successful and competitive.

Determining Project Suitability


In order to determine the project suitability for Six Sigma projects, there are four key considerations to be taken
into account.

•The first consideration is whether or not the project is important to the organizational goals.
•The second consideration, after ensuring that projects are linked to the organizational goals, is prioritizing the project portfolio. Within an organization there are several projects that could make up the Six Sigma portfolio, all of which could have Six Sigma methods applied to them, but we cannot focus on all the projects at once. Therefore, the projects need to be prioritized.
•The third consideration is the appropriate methodology to apply to the project. This involves determining which approach to use in applying the Six Sigma initiative and which tools to use.
•The final consideration is determining which team the Six Sigma project should be allocated to.

Now, let's look at an example. Consider a manager carrying out the first step in choosing a project by examining an existing project. She knew that the department running the project had the goal of increasing its market share by 5% during the financial year. Achieving this means keeping existing customers happy, because satisfied customers are the best advertisement and a positive reference for the organization. So when deciding whether the project was suitable for Six Sigma, she made sure that it was linked to this major organizational goal. The project had the goal of maintaining customer satisfaction, and the manager considered the impact of Six Sigma on this goal by asking: how will customers be affected while changes are being implemented, and how does this effect impact the organizational goals? After ensuring the projects are linked to the goals, the second consideration is prioritizing the projects. There are several common criteria used to prioritize projects.

Consider Samuel, a manager who has examined a project with a single large problem. The problem has been around for some time and has resulted in downtime in several instances.
relatively easy fix to the problem if the right resources are properly focused. The manager also looked at a second
project. This project was newer and had many small problems. The project was made up of processes that were
established a long time ago and because of this, there might be considerable resistance because employees
were used to the old way of doing things. Because of these elements, the manager gave the first project a higher
priority for the application of Six Sigma over the second project. It’s important to note that both projects were
suitable, but the first project was easier to implement than the second project. The third consideration is
determining the most appropriate method.

Within Six Sigma, there are two main methodologies –



Design for Six Sigma, or DFSS – Design for Six Sigma is a methodology used when designing new Six Sigma projects. It is used to create processes within projects that are compliant with Six Sigma methods and metrics. It is also useful for redesigning existing processes in an entirely new way when they are not meeting current customer expectations. DFSS involves identifying the need for the project, making sure it aligns with strategic goals, and then designing, optimizing, and validating the new process.

Define, Measure, Analyze, Improve and Control, or DMAIC, methodology – DMAIC is primarily used when we are interested in improving existing processes. However, if we need to design a new process that is compliant with Six Sigma, then Design for Six Sigma is more appropriate.
Organizational Alignment of Goals
Six Sigma projects must be continuously monitored for alignment with the organization's goals. To ensure this alignment, a Six Sigma team must secure full support of the leadership team, ensure change management and communication, continuously review progress and the alignment of project metrics to organizational goals, organize an adequate team and put a support system in place, and then conduct Six Sigma training.

For the first aspect, ensuring full support of the leadership team, it is critical that the executive team is involved in and actively endorses all of the key decisions, since Six Sigma projects and decisions involve time and money. Leaders should be directly involved to make sure that the focus is truly aligned with the needs of the organization. Involving the corporate leaders also keeps them in the know and provides them with a learning opportunity. Finally, the active participation of leadership and their commitment to the Six Sigma project conveys the actual and perceived urgency and importance of the project to all of the stakeholders in the organization, including the enterprise leadership team, leaders of the business units, members of the improvement project team, and employees at the team level.

Change Management
The next key aspect is change management, and it is important to ensure that change management and communication are taken into account within the Six Sigma project. This means making sure that all employees and stakeholders affected by the Six Sigma project are communicated to and with, and that they are helped through any changes that might affect them. Communication should be two-way, with management providing a message and employees providing feedback, and the message should be tailored to different groups. In addition to this communication, we should consider media events and similar activities that are appropriate for further communicating the plan. Finally, it is important to create a plan for how the team will deal with any negative reactions.
Another important aspect is the continuous review of project progress and measurement of the project performance metrics. It is crucial that the metrics are reviewed on a regular basis against target values for the project and against the organizational goals. A key part of this is making sure that the metrics are stated in a way that is clear to all stakeholders. As part of the Six Sigma initiative, it is also important to have an adequate team and a support system in place. The Six Sigma initiative should be supported by a project team, a Champion, a Six Sigma Master Black Belt, and a Six Sigma Black Belt, along with the appropriate executive leadership. Once the project team is identified, it is important to define the appropriate roles for the project team members and clarify their responsibilities. These decisions are driven by a variety of factors, including the Six Sigma objectives, the implementation plan, the budget, and existing staff and resources.

Typical roles within a Six Sigma team include –


•The Sponsor or Champion or Process Owner;
•The Implementation Leader, which is typically the Master Black Belt;
•The Six Sigma director or Quality Leader, who could also serve as the coach, such as the Master Black Belt or Black
Belt;
•The Team Leader, which could be a Black Belt or a Green Belt, depending on the level of difficulty of the project;
•The team member, which could be a Green Belt or a Yellow Belt.

The final key aspect is ensuring that timely and appropriate training on the Six Sigma tools and techniques is delivered within the organization. A continuous improvement initiative requires that members and stakeholders have the necessary information and skills to constantly gain new insights.

Some of the essential aspects of Six Sigma training involve –


•Delivering hands-on learning by putting concepts and skills into immediate practice and application
•Providing relevant examples and links to the learning that reflect challenges that are affecting the business
•Catering to a variety of different learning styles, adapting the Six Sigma rigor into training modules that the employees
can easily digest
•Creating champions and ambassadors for the project out of those training events
•Considering ongoing training to keep everything fresh and relevant within the company.

Understanding the Business processes


The primary focus of any continuous improvement effort is the process. Dr. Juran defined a process as "the logical organization of people, materials, energy, equipment, and information into work activities designed to produce a required end result – a product or a service."
Business processes have three main characteristics:
•They are a series of events that produce outputs.
•They are defined through numerous steps.
•Their beginning and end points are marked by boundaries.

Processes are linked together in a business system, which is the 'overarching method or procedure for process execution'. The business system ensures that the process receives the resources it needs at the right time. The process is the heart of the system, and it is the focus of Six Sigma continuous improvement. A process can be further divided into subprocesses, and those subprocesses can be broken down into steps, where a step is the smallest unit of work that can be measured.
The SIPOC diagram is a way to identify the various components of a process within an organization. The acronym SIPOC stands for Suppliers, Inputs, Processes, Outputs, and Customers.

The SIPOC diagram helps to identify the start and end points of the process. Its components provide a high-level view of the process, showing all of the components and the boundaries, which helps Six Sigma teams identify areas that require improvement.

•Suppliers – The people, departments, or organizations that provide the materials, information, or resources that are worked on or transformed within the process.
•Inputs – The materials or information provided by the suppliers; the inputs are transformed, consumed, or otherwise used by the process.
•Process – The series of steps that transforms the inputs into outputs.
•Outputs – The products or services that result from the process being performed on the inputs.
•Customers – The people, departments, or organizations that receive the outputs from the process.
Now consider how the input-process-output feedback system works. The inputs are the data, opinions and ideas, or orders that come from the suppliers; these could take the form of raw materials needed for the next element, the process. The process includes the steps that transform the inputs into the necessary outputs. The outputs are the products, services, training, or designs that the customer has requested. The input-process-output diagram also provides a way for feedback to flow from the end customer back to the outputs or the inputs, and from the outputs back into the inputs. This feedback loop is an essential part of the diagram, ensuring that information from the products and the customers is used to further improve the process and its inputs.
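As a purely hypothetical illustration (the process, entries, and structure below are assumptions for this sketch, not taken from the source), a SIPOC capture for an order-fulfillment process could be recorded in Python as a simple dictionary:

# Hypothetical SIPOC capture; every entry here is illustrative only.
sipoc = {
    "Suppliers": ["Warehouse", "Payment gateway provider"],
    "Inputs":    ["Customer order", "Stock levels", "Payment confirmation"],
    "Process":   ["Receive order", "Pick items", "Pack", "Ship", "Confirm delivery"],
    "Outputs":   ["Shipped package", "Tracking number", "Invoice"],
    "Customers": ["End consumer", "Customer service team"],
}
# Print one line per SIPOC component to review the high-level view.
for component, entries in sipoc.items():
    print(f"{component}: {', '.join(entries)}")

Listing the components this way makes the process boundaries explicit: the first supplier hand-off marks the start point and the final delivery to the customer marks the end point.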

Core Business processes


Core business processes are the processes considered essential for creating value for the business. All other processes are considered support functions, which provide vital resources or support to sustain the core processes. The classification of processes as core can vary from organization to organization; however, some processes are considered core in the majority of organizations, including marketing, sales, service, product development, and product creation. For example, purchasing and human resources can serve the entire supply chain or a large portion of it, but they are not the main business of the organization.

One of the major steps in process improvement is the identification of core processes. Six Sigma is typically applied only to core processes or a combination of core and support processes, since any effort spent improving the wrong process can result in a waste of both time and money.

A process is generally core if we can answer yes to three key questions –


Does the process cross multiple departments?
Core processes use the talents of people from different departments in order to produce the desired output.
Support processes, however, generally only occur within a single department.


Does the process generate revenues for the organization?
A core process has the potential to generate money for the company. A process may also be core if it helps to
retain customers, or if it produces a product or a service.


Is the process customer-focused?
A core process is focused on the external customer. Customers may be companies, distributors, or consumers.

Process Analysis
Analyzing processes reveals improvement opportunities, but it is crucial to stay focused on the goals of the organization and the core processes. If an organization loses sight of the big picture, it may improve the wrong processes and find itself in a tight spot.

One of the key core processes is marketing, which is the means by which the organization gathers information on customers' wants and needs. Marketing is also the medium through which information about the organization's products and services reaches the customer. Marketing information is an input to all of the other core processes, and marketing personnel receive output directly from all of the other core processes. Sales and service, in turn, is a key portal for the direct exchange of information with customers; the sales and service staff interacts with the marketing staff and with the product development personnel. The product development staff interacts with employees in all of the other core processes, exchanging information about products, services, features, and functions with personnel in product creation, marketing, and sales and service.

Business drivers
Organizational business drivers are the driving forces that influence every aspect of work done by an organization. Business drivers are among the highest-level performance measures within an organization, for instance financial measures. They form the backbone of any business effort to improve customer, operational, and financial performance, and they are essential to the success of an organization. Without proper business drivers, work can become vague and unfocused, since it lacks motivating goals to drive progress within the organization.

Business drivers drive almost everything in the organization, from management decisions to employee actions, which is why they are such an important element of the Six Sigma framework. Efforts toward organizational improvement must be driven by the needs of the business, not by the improvement strategies alone. Business drivers should not be confused with organizational goals, though they are strongly linked to them: the primary aim of business drivers is to stimulate growth in the areas most important to the organization's overall success. Having organizational business drivers helps Six Sigma teams select the right projects and align their efforts to support one or more of these drivers. Examples of important business drivers include profit, market share, customer satisfaction, efficiency, and product differentiation.


Profit: Profit is shown as net income on the income statement. It is a measure of all transactions, even those for which cash has not yet been collected as revenue or paid as an expense.

Net profit = Gross profit − (operating expenses incurred during business operations + taxes owed)
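For example (hypothetical figures), a gross profit of $500,000 with $300,000 of operating expenses and $50,000 of taxes owed gives a net profit of $500,000 − ($300,000 + $50,000) = $150,000.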

Market Share: Market share is the portion of sales of a certain product or service in a given region that is controlled by the company. While revenue growth allows a company to gauge its growth internally, market share indicates how the company is doing in comparison with similar entities. Market share accounts for market variables and performance from year to year, and it provides information about the financial health of the organization.

Market Share = (Company's sales of the product or service ÷ Total market sales of that product or service) × 100
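For example (hypothetical figures), a company selling $2 million of a product in a region where total sales of that product are $20 million holds a market share of 2 ÷ 20 × 100 = 10%.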


Customer Satisfaction: It has been observed that keeping existing customers is more profitable than winning new ones, and the longer we can keep customers, the more profitable both we and they become. This is why customer satisfaction is one of the key business drivers within Six Sigma and other continuous improvement initiatives. The best way to keep customers is to meet their expectations. The first step in a Six Sigma improvement project involves the team developing an understanding of the customer's requirements and expectations; this helps the team determine how the company can meet those needs. Organizations implementing Six Sigma continuously measure customer satisfaction, as it helps them manage their customer relationships and use this measure to achieve continuous improvement.

Organizational Efficiency: Organizational efficiency represents how well a company uses its resources to achieve its strategic goals. In financial terms, it lies in an organization's ability to optimize its bottom line based on capital acquired, whether through equity or debt. Improved organizational efficiency leads to lower costs, optimal resource allocation, and greater agility and sustainability of the organization. Organizational efficiency can be improved in various ways, such as deploying quality systems and implementing process improvement and continuous improvement efforts.

Product Differentiation: This is a business-level strategy intended to increase the perceived value of an organization’s
products or services compared to the value of competitors’ products or services. Product differentiation helps create a
customer preference for the organization’s products or services. Crucial factors used to provide differentiation include
product features, superior quality, a refined delivery channel, and a specific marketing approach. Organizations that use
product differentiation as a driver typically seek to be unique in the industry along the dimensions that are widely valued
by customers. An organization can then exploit this advantage effectively, because customers may be willing to pay a
small premium for that uniqueness, leading to increased organizational profitability.

Organizational Metrics
The process of evaluation offers numerous possibilities for choosing metrics. The most crucial part is to choose
the most appropriate metric for a specific project or process; otherwise time is wasted collecting data for metrics
that are neither useful nor appropriate. Different metrics may be more appropriate at different levels of the
organization or for specific processes. Some of the most important characteristics that good metrics have in
common are –

•All good metrics are developed in conjunction with the employees who have the best knowledge of the process being
measured.
•Good metrics should also be linked to the organizational goals.
•Different types of metrics are used to address different goals at different levels of an organization.

There are primarily three types of metrics – Business metrics, Operations metrics, and Process metrics.


Business Metrics: Business metrics are typically used to measure the financial aspects of an organization or quantify
high-level aspects of the operations. Some of the metrics used at the business level include return on equity, earnings per
share, growth-to-expenses ratio, and return on investment. These metrics are usually reported in financial reports or
operational status reports.

Operations Metrics: Operations metrics measure aspects of the operational activities within an organization, with the aim
of improving and guiding the management of these operations. These metrics give a high-level overview of how
operations are running – for example, the percentage of returned products.

Process Metrics: Process metrics provide detailed information about frontline production processes and are typically used
to monitor production and machinery. All the processes within an organization are linked in some form of system; for
instance, the output of one process might be the input to another, which means the metrics are linked as well. Because
processes have inputs and outputs of different types, these inputs and outputs take the form of variables. Process inputs are
normally the physical materials and labor needed to complete a process and might include controlled, uncontrolled, and
internal variables. Such inputs are categorized as key process input variables.

Key Process Output Variables


Key process output variables are those important outputs of a process, which may vary at the different levels of
an organization. The key process output variables at the strategic level generally include profits, customer
satisfaction, and quality. Whereas at the operational level, key process output variables would include the number
of defects in a product.

Different types of metrics are linked to different types of goals within an organization.


Corporate-level goals are the strategic aims of an organization. ‘Business metrics’ are also referred to as strategic metrics,
and these are commonly linked to corporate goals. For instance, an organization sets a goal to increase overall company
revenue by 25%. In this case, a suitable business metric to measure progress could be profit, revenue, or return on
investment.

Department-level goals include increasing the Sales Department revenue by 15% to meet market expansion goals. This
type of goal is quantified by ‘operations-level metrics’ such as percentage of sales per year.

Tactical-level goals aim at the functional level of production. For instance, the goal could be to reduce a cycle time for a
key helicopter component from four months to 1 month. This type of goal is quantified through process-level metrics such
as cycle time. Now let’s take a look at an example.

Case Study: Let us consider a large aircraft engine manufacturer, with specific metrics linked and established to
different levels of goals. Let us consider that
•Strategic goal of the company is to achieve 99% customer satisfaction rating in all of its production lines. Therefore the
metric associated with this is the number of customers that have purchased more than one product from the organization.
•Next, the business or department goal aims to reduce the defect level to 5.5 sigma. The associated metric is therefore the
number of defects per million opportunities (a short DPMO calculation sketch follows this list).
•Then the process or tactical-level goal of the company is to have 75% of the managers and supervisors complete Six
Sigma Green Belt training. The metric associated with this then is the number of employees that have been sent for
training.
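The defects per million opportunities (DPMO) metric mentioned in the second goal above is conventionally computed as
defects divided by total opportunities, scaled to one million. A minimal sketch with hypothetical counts:

# Hypothetical counts illustrating a DPMO calculation.
defects = 38                   # defects observed during inspection
units_inspected = 1_200        # units inspected
opportunities_per_unit = 15    # defect opportunities per unit

dpmo = defects / (units_inspected * opportunities_per_unit) * 1_000_000
print(f"DPMO: {dpmo:.0f}")     # DPMO: 2111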

Characteristics of Good metrics

Some of the attributes of a good metric include –


•Good metrics measure performance over time.
•A good metric must indicate emerging trends; isolated measurements at static points in time just can’t do this.
•A good metric must give direct, clear, and unambiguous information about the process we are trying to quantify. It
enables managers to see the process at a glance or in the larger context of emerging trends within the processes.
•Good metrics must be linked directly to an organizational performance goal.
•Primary function of a metric is to give an indication of whether a goal is being achieved or not.
•A metric should be practical which means that the employees that are most familiar with the process for which that
metric is being developed are involved in developing the metric.
•Metric needs to be easy to collect and use.
•Metrics also need to operate over time so that they can indicate trends.
•Metrics also need to be flexible enough to change and adapt as the processes change.
•Metrics must also address all of the relevant performance goals that are linked to that specific process or project

While designing a measurement system of metrics, several considerations need to be taken into account. Since
many metrics are typically used to evaluate large projects, each one should have a different person overseeing its
collection and use. If one person is responsible for too many metrics, they can become confused, and the extra
work associated with collecting metrics may also mean that the quality of the data being collected suffers.

The metrics need to be selected so that the past history of the process is considered, as well as the present
operating conditions and the potential future direction of the process. While designing the system of metrics, the
managers and analysts also need to consider that processes are not static. They will change and evolve over
time. Several metrics working together give a much better picture of how well goals are being met, rather than
numerous unrelated metrics. And lastly, the perspectives of all the stakeholders involved in the process need to
be considered when choosing the metrics for that measurement system. What may be important for one type of
stakeholder may be uninformative to another, and therefore different levels of metrics should be incorporated
into the system.

Balanced scorecard
Balanced scorecard was developed in the 1990s by Dr. Robert Kaplan, a professor at the Harvard Business
school, and Dr. David Norton, president of the Palladium Group. We can define balanced scorecard as ‘a
management system that assists in aligning the metrics of an organization to their strategy and goals’. In terms
of Six Sigma, the balanced scorecard helps to link the Six Sigma goals to the organizational vision. Balanced
scorecard is a way of viewing an organization from different viewpoints, so that managers can develop or select
metrics that adequately evaluate the organization from these different business perspectives.

Primarily there are four different perspectives that a balanced scorecard addresses –

•First perspective is the financial performance, and this is a traditional way of viewing an organization in terms of profit
and financial status. Some of the typical metrics linked with this perspective are net income, return on investment, market
share, and cash flow.
•Second perspective is customer. It’s important to view the organization with a customer in mind, because this gives a
different perspective when choosing metrics. Therefore managers need to understand who the customers are and what
their needs are in order to meet the customer-oriented goals. The primary metrics typically associated with this
perspective are customer retention, customer satisfaction, number of complaint logs, and customer loyalty.
•Third perspective is the internal business process. Now viewing a company in terms of its internal business process is
crucial for establishing how well those processes are running. Metrics from this perspective include project management
measures, throughput, and sigma quality levels.
•The fourth and final perspective is learning and growth. It is important to view an organization from the
perspective of learning and growth, since many knowledge workers have a goal of continuous learning. Metrics that are
typically associated with this perspective are staff morale, training levels, amount spent on training, and extent of
knowledge sharing.

Example illustrating use of balanced scorecard with a Six Sigma project


Let us assume that Jason, the chief operations officer of a company that manufactures aluminum cans, is using a
balanced scorecard approach to link the goals and metrics on the latest project in the Beverage Department. The
first area on the scorecard that he considered was financial performance, for which the goals of the new
beverage project were already established: the financial goal was to provide the cheapest product to the market
and undercut the competition by at least 5%.
Thinking about the metrics from this perspective, he chose cost per unit as the metric. The next perspective he
looked at was the customer perspective. The customer in this case was a large beverage manufacturer, and
customer satisfaction was a primary consideration, so the COO chose to measure this using a survey completed
after the project was delivered. When the COO considered the new beverage project from the internal business
process viewpoint, he decided that sigma levels of the production process needed to be tracked to ensure proper
quality levels. This metric addressed the goal of achieving six sigma in internal production processes and also
linked to the goal of satisfying customers. The final perspective the COO approached for the new beverage
project was learning and growth. To measure training effectiveness, he chose the number of people trained in
Six Sigma methods as the metric. This metric gives an indication of how capable the organization is of handling
the new project and achieving the Six Sigma process goal.

“The concept of lean refers to the continuous improvement initiative that aims at streamlining processes and reducing
wastes.”
Lean originated in the Japanese manufacturing industries in the 1980s as a waste reduction and improvement
methodology. These Lean methods and principles then spread to logistics, and from there to the military and
construction industries as well. Lean methodologies and principles have since been applied
successfully across many other industries, including service and transactional industries. These industries
include healthcare, insurance, financial services and banking, call centers, government, and retail and
transportations among many others.

Lean methodology incorporates a powerful set of tools and techniques designed to maximize customer value
while reducing waste along the entire value stream. The concept of lean focuses on improving the overall
efficiency, quality, and customer satisfaction. Lean is a preferred strategic choice for many business
organizations due to its ability to improve customer satisfaction and deliver bottom-line financial gains to the
organization. During implementation, an organization moves from identifying waste to eliminating
waste through four stages of the Lean methodology.

Typical Lean Methodology strategy consists of the following four stages –


•Identification of the opportunity
•Design of the solution
•Implementation of the solution
•Continuous improvement.

First stage – Identification of the opportunity


Several important events occur to prepare the employees to identify improvement opportunities. This includes
training, understanding of value as defined by the customer’s needs and expectations, and then identifying
opportunities that exist with the current processes.

Second Stage – Design of Solution


Once the opportunity is identified, then a solution is designed. For example the processes and flow that produce
each finished product or service are identified. Thereafter the non-value added activities are pinpointed, examined
and slated for elimination, if required.

Third Stage – Implementation of the Solution


In the third stage of solution implementation, the teams decide which improvements to address first. Most of the
project teams begin with focusing on the low-hanging fruit. Such improvements involve the employees directly
without management involvement, and can typically be made quickly and cheaply. These are mainly processes
whose waste is highly visible to the organization.
As a team it becomes important to really identify those opportunities that exist and prioritize which
improvements to address first.

Fourth Stage – Attaining Perfection through Continuous improvement


The final stage of implementing lean methodology strategy is to attain perfection through continuous
improvement. In which case employees who have been trained to think about their work as processes and have
been empowered to suggest and enact improvements should take ownership of their processes. Then the team
members should continue to troubleshoot to find new ways to eliminate waste and monitor the processes to
ensure that the improvements are sustained. Simultaneously, the team should aim to identify new ideas for
waste elimination and act upon these.

The Lean tools fit nicely into the Six Sigma DMAIC methodology. In general, Lean concepts and tools such as the
Theory of Constraints, value chain flow, and perfection are used throughout the Six Sigma DMAIC methodology
as it operates within an organization.

Primarily in the improvement stage of the Six Sigma DMAIC Methodology, the tools that focus on waste
elimination, reduction and prevention, and efficiency improvement are the point of focus. These include tools
such as Muda, or the 7 wastes, Value Stream Mapping, Pull systems, Kanban, 5S, Standard work, and Poka-Yoke.
In the Control phase of the Six Sigma DMAIC Methodology, the Lean tools that focus on process control, such as
Total Productive Maintenance and visual factory or visual control, are used to control the improved process.

Lean versus Six Sigma


While we know that Lean and Six Sigma methodologies both focus on continuous improvement, there are several
key differences between the two methodologies. Some of the crucial points of difference are listed below –

Focus
•Six Sigma focuses on eliminating defects through variation reduction and on improving customer satisfaction, using the
DMAIC methodology and sophisticated statistical tools.
•Lean focuses on increasing the velocity of processes by eliminating barriers and waste, speeding up production
processes, eliminating non-value-added activities, and reducing inventory.

Methodology
•Six Sigma follows the five-phase methodology of Define, Measure, Analyze, Improve, and Control, or the DMAIC
methodology.
•Lean follows four steps: identify the problem, design a solution, implement the solution, and then focus on continuous
improvement.

Tools
•Six Sigma uses advanced statistical tools like design of experiments and hypothesis testing.
•Lean relies on pull scheduling, setup reduction, and the 7 wastes.

Assumptions
•With Six Sigma, the assumption is that as we reduce variation, we are also reducing defects.
•With Lean, the assumption is that as we remove waste, we are improving the business continuously; small
improvements, supported by data and analysis, are preferred.
Despite these differences, Lean and Six Sigma are very similar and complementary.

•Both methodologies were inspired by the plan-do-check-act cycle (PDCA).


•Both use systematic problem solving tools.
•Both depend on project teams and specialists for implementing and supporting the process improvement efforts.
Typically Six Sigma relies on this a little bit more heavily than Lean.
•Both Lean and Six Sigma require employee participation which often results in widespread behavioral and systematic
changes as well as a happier and more engaged workforce.
•Both Lean and Six Sigma deliver significant bottom line benefits.

Review of PDCA Cycle



PLAN: The first step involves identifying the problem and determining its cause. A plan for solving the problem is then
made with the participation of employees and management.

DO: The second step involves communicating the improvement plan to all employees, who take responsibility for
achieving the objectives. If the workers face any difficulties, or if the plan is not working properly, then those differences
will be identified in the check phase of the cycle.

CHECK: The third step involves comparing the results achieved against the objectives set out in the plan. Any difficulties
the workers face, or gaps between expected and actual outcomes, are identified in this phase.

ACT: Finally, in the act phase, when the necessary results are achieved, they are documented and standards are set in
order to prevent the problem from reoccurring. The plan is iterative, so if the results are not achieved the plan is repeated
until they are.
Process of integrating Lean with Six Sigma
As we know, ‘Lean’ is a continuous improvement initiative that streamlines and improves processes by reducing
waste and cycle time. Even though Lean and Six Sigma commenced as two different strategies in two different
environments, with different tools and methodologies, they are increasingly being seen as complementary
processes. From this point of view, the combined methodologies are commonly referred to as Lean Six Sigma.

The primary motive of Six Sigma is to help organizations reduce defects and improve quality, whereas Lean
concepts help reduce waste and improve process flow and speed. Because of this complementary nature, many
corporations incorporate the Lean approach into their overall Six Sigma strategy. This has led to a growing body
of knowledge about how best to use their combined strengths to execute organizational strategies and gain and
keep competitive advantage. Any wasted time and effort translate into lost business opportunities and revenue,
and the Six Sigma methodology and the Lean initiative allow improvements to be made across an entire
organization to address these problems. The result is that errors are reduced, quality is improved, and the
business performs faster.

Certainly the benefits of implementing Six Sigma are typically larger and have more of an impact than most other
business improvement strategies. The combined Lean Six Sigma methodology involves using Lean
methodologies to identify and remove non-value-adding activities and processes and then applying Six Sigma
methodologies to identify and eliminate process variation.

This union of Lean and Six Sigma is essential as Lean alone cannot bring processes under statistical control and
Six Sigma alone cannot dramatically improve process speed or reduce wastes in the processes. Therefore the
integration of Lean and Six Sigma tools and processes, helps in creating a powerful combined methodology to
improve quality, efficiency, and speed in every aspect of a business. Practitioners constantly debate about the
process of continuous improvement and how to define the relationship between Lean and Six Sigma.

Lean focuses on increasing the velocity of processes by eliminating barriers and wastes. This speeds up production
processes by eliminating those non-value-added activities and reduces inventory.
Six Sigma focuses on eliminating defects by reducing process variation, which results in improved customer satisfaction.
The process variation is understood and brought under control by applying the DMAIC methodology and advanced
statistical tools.
The process of improvement and waste reduction begins by understanding and identifying which tools to use, Lean
or Six Sigma. This involves choosing certain criteria that will help us decide whether Lean or Six Sigma tools will
work better for a given situation.

First Criteria – Time frame and financial commitment


Lean is considered suitable when a quick and relatively inexpensive strategy for reducing time is required.
However, in case it is feasible and worthwhile to apply rigorous statistical tools to uncover root causes of a
problem, then Six Sigma might be more appropriate. Lean tools generally provide fast results while Six Sigma
tools require a longer time frame.

Second Criteria – Nature of the Problem


The first point to consider in this case is the primary problem that needs to be addressed. Is it waste or speed, or
is it defects and variation in processes? Lean tools help reduce waste and increase speed, while Six Sigma tools
generally address defects and variation in processes.

Third Criteria – Capacity for change


In this case we are trying to ask how quickly an organization can change in response to a new improvement
methodology. Lean tools can be implemented while people are still getting used to a new way of thinking.
However Six Sigma tools usually require a longer time frame.

Fourth Criteria – Pervasiveness of the problem


In case the problem is isolated and relatively simple to fix, then Lean tools might be more appropriate. But if it
infuses the entire process, then Six Sigma might be more appropriate. Since Lean tools are more helpful for
addressing low-hanging fruit, problems that are relatively simple to fix while Six Sigma tools and techniques are
used for addressing more pervasive problems.

Infuse Lean and DMAIC Methodology


In this section, we try to understand how the Lean tools line up with the five stages of the DMAIC methodology.

•In general in the Define phase, the project team is concerned with defining the problem, the project goals, and customer
deliverables. In which case, the primary required deliverable in the Define phase is the process map. Now when we
consider a Lean Six Sigma methodology, the process map is replaced by the value stream map: a current state value
stream map can be used in the Define phase, and a future state value stream map can be used in the Improve phase.
•In the Measure phase, we measure variation in the current process. Lead time measures the time between the initiation of
a process and its completion, while takt time is the amount of time available to produce a unit of product as required by
the customer. Any gap between these two times indicates a problem (a short takt-time calculation sketch follows this list).
•In the Analyze phase, the value-stream map could be used to look for process improvement opportunities.
•In the Improve phase, the project team acts on the root causes of the defects identified earlier. The team could use the 7
wastes tool to quickly focus on reducing waste that is found to be a key issue. The 5S methodology can also help improve
a process and eliminate waste through organization, cleanliness, and workplace standardization. Kaizen is a method for
continuously improving a process through incremental improvements, rather than a leap caused by reengineering or
redesigning a process; using the kaizen philosophy helps to sustain those improvements.
•In the Control phase, Lean Visual factory tools and total productive maintenance are useful tools for controlling future
performance. Visual factory tools such as charts and schedule boards help maintain control over processes.
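As a hedged illustration of the Measure-phase comparison mentioned above, takt time is conventionally computed as
available production time divided by customer demand; the figures below are hypothetical:

# Hypothetical sketch comparing takt time with an observed cycle time (Measure phase).
available_minutes_per_shift = 450   # e.g. 7.5 hours of production time per shift
customer_demand_per_shift = 90      # units the customer requires per shift

takt_time = available_minutes_per_shift / customer_demand_per_shift   # minutes per unit
observed_cycle_time = 6.2                                             # measured minutes per unit

print(f"Takt time: {takt_time:.1f} min/unit")   # Takt time: 5.0 min/unit
if observed_cycle_time > takt_time:
    print("Cycle time exceeds takt time -> process cannot keep up with demand")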

Five Laws of Lean


Lean concepts and tools have been applied equally in the manufacturing and service sectors. When Lean is
applied in service industries, its key concepts are adopted even though what flows is intangibles and services
rather than materials and products. Let’s look at some examples –
•Ford Motor Company, a global automotive manufacturer, who tried to eliminate waste in the process of manufacturing,
supply chain, and service sector using Lean Six Sigma.
•Standard Life (UK-based Investment Company) improved customer satisfaction by transforming service and
transactional processes through the application of Lean concepts and tools.

Lean uses various tools that bring several benefits to an organization, increasing safety, capacity, yield,
performance quality, and the level of team integration, with the ultimate aim of boosting profitability. Lean
improves overall work practices, thereby ensuring high-quality products, boosting employee performance and
customer satisfaction, and lowering overall costs and inventory costs. Other benefits include reduced lead and
cycle times, improved efficiency and quality, improved communication between the organization and its
customers, reduced physical work space and facility requirements, and greater process flexibility.

There are primarily five Lean laws that companies have adopted to guide managers in their approach to the
initiative. We shall discuss them one by one –


Law of the Market: Under this law, the customer is the highest priority. As it has been rightly said, customers are the
target market, and without them there is no business. All aspects of the organization and its processes should be oriented
toward keeping the customer satisfied.

Law of Flexibility: The law states that the velocity or speed of a process is proportional to the flexibility of that process.
This indicates that the speed at which a process can run is directly related to how flexible and adaptable to change that
process is.

Law of Focus: The law of focus says that 20% of activities cause 80% of the problems or delays in a process or
operation. This is the Pareto principle in action.
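A minimal sketch of how a team might check this 80/20 pattern, using hypothetical delay counts per activity (the
activities and counts are illustrative only):

# Hypothetical delay counts per activity, sorted to reveal the Pareto pattern.
delays = {"order entry": 4, "credit check": 6, "scheduling": 52,
          "material handling": 30, "packing": 5, "shipping": 3}

total = sum(delays.values())
running = 0
for activity, count in sorted(delays.items(), key=lambda kv: kv[1], reverse=True):
    running += count
    print(f"{activity:18s} {count:3d}   cumulative {running / total:6.1%}")
# Here the top two activities account for roughly 80% of all delays.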

Law of Velocity: The law states that the velocity of any process is inversely proportional to the amount of work in
progress. This is a very useful indicator of lead time because the more work in progress we have, the slower the process
and the longer the lead time.
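This relationship is often expressed through Little's Law (lead time equals work in progress divided by the average
completion rate); the following hedged sketch uses hypothetical numbers:

# Hypothetical sketch of the Law of Velocity via Little's Law:
# lead time = work in progress / average completion rate.
work_in_progress = 120    # items currently somewhere in the process
completion_rate = 30      # items completed per day

lead_time_days = work_in_progress / completion_rate
print(f"Lead time: {lead_time_days:.1f} days")                        # Lead time: 4.0 days

# Halving the WIP at the same completion rate halves the lead time.
print(f"Lead time with WIP of 60: {60 / completion_rate:.1f} days")   # 2.0 days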

Law of Complexity: The law states that complexity adds more non-value and cost than either poor quality or slow
process speed. Essentially it means that complex products create more waste and less value than poor quality products or
a slow process speed. Therefore keeping the product or service as simple as possible is vital to achieving Lean.
Lean concepts
The fundamental concepts behind the Lean way of thinking can also be thought of as the steps in the application
of the Lean philosophy – each one builds on the last. Therefore, in order to apply the lean principles in the
organization successfully, the management needs to adopt the four key concepts of Lean, which include –
Value, Value Stream, Pull, and Perfection. We now discuss each of the concepts –


Value: The first concept is value, which is ultimately defined by the customer. Value is a measure of how well a product
or service meets the customer’s needs. The primary aim of the Lean approach is a shift toward focusing on customers and
their requirements, rather than on internal processes. Consequently, management’s first step in applying Lean in an
organization is to identify what creates value, based on what the customer determines to be valuable in the products or
processes on offer. A key aspect of this is good communication, because it facilitates the transfer of information between
the customer and the organization about what the customer values. Communication between customers and the
organization should be ongoing to ensure that any change in value is noted as soon as possible, so that changes in
processes and products can be made to meet the changing needs of the customer, allowing the organization to stay ahead
of the competition.

Value Stream: The second key concept of the Lean philosophy is the value stream. The value stream represents all the
activities and processes that are involved in producing a final product. This includes the value-added and the non-value-
added activities. The value streams can be traced for each product or service offered by an organization. The value
streams involve the suppliers, the organization, and the eventual customers or consumers. It is essential to note that not all
steps or activities add value; those that do not are called waste steps, and the fundamental goal of Lean is to eliminate
these waste steps from processes in the organization. Identifying the value stream allows managers to recognize where
process improvements should be made and where the value stream can be optimized by removing those waste steps.
Removing them may mean that a process needs to be restructured so that it flows more smoothly.

Pull: Under this third key Lean concept, applying Lean in an organization means shifting to a system where customers
pull products through the value stream. The organization makes what customers actually request, as they ask for it, rather
than producing a set amount of goods based on sales forecasts and market predictions, which is referred to as pushing
products through the value stream. For instance, rather than producing 500 computer motherboards based on sales forecasts, a
large computer parts manufacturer changed to a pull system where they only manufactured motherboards as customers
order computers. This means that the motherboards are manufactured and immediately placed in computers and shipped
to customers. Essentially every motherboard that is manufactured has already been sold to a waiting customer.

Perfection: The fourth and the last key concept of Lean is perfection. Every organization must aim towards constant
improvement in processes and products. This involves reviewing processes and customer needs to strive for the ideal
product or service as required by the customer. This means that once products with a value are being pulled through the
value stream, the processes involved are reviewed and optimized in an ongoing cycle of improvement.
Five Steps in the Lean Process
In order to reach the destination, we require a process map. A five-step process has been created to help us
implement and benefit from Lean. We first identify value and map the value stream, then create flow, establish
pull, and finally seek perfection. The final step then leads back to the first step, creating a cycle.

We shall now discuss the five steps involved in the lean process –

•Identifying value involves finding out what the customers actually want and how much it’s worth to them.
•Mapping the value stream involves outlining all of the processes, tasks, and information that is needed to develop a
product or service and deliver it to the customers.
•Create flow, to remove interruptions, bottlenecks, and periods of inactivity from the value stream.
•Establishing pull involves creating a system in which customer demand drives production
•Seek perfection, a continuous process of finding ways to improve on the first four steps.

First, we should define value for each product or service the organization offers. This gives direction to the entire
organization’s work processes, determining what, how, and whether it should provide specific goods and services.

In Lean thinking, the customer defines value, where the customer could be a person, a team, or a company using
the organization’s products or services. The value a customer assigns to a product depends on factors such as
the product’s quality, capabilities, and price, and how well it meets the customer’s expectations. For instance,
customers of a casual clothing retailer may place a high premium on variety, style, and affordability, whereas
customers of a more upmarket retailer may assign greater value to quality and personal service. Customers of a
healthcare provider are likely to value rapid service, accuracy, professional expertise, empathetic treatment, and
a tranquil environment. Once we are able to identify value, we map the organization’s value stream, including all
of the steps used to bring a product or service from conception to delivery to the customer. The primary goal in
performing this task is to identify and eliminate waste in the process. Waste is any step that doesn’t produce
value or that creates an obstacle in the flow of value to the customer.

The question arises, how does one map a value stream?


This involves walking backwards through the entire production process, drawing the current and future state
maps, and finally developing an action plan for moving from the current state to the future state. Once we have
analyzed the value stream in the organization, the next step is to optimize the flow of value through the value
stream. This involves removing all of the obstacles and bottlenecks or the wastes. Three strategies for creating
flow are organizing people, ensuring quality at the source, and ensuring that equipment is reliable and well
maintained. Establishing pull is about supplying products at the same rate at which the customer demands or
consumes them. The customer should pull the production rather than pushing products out to the customer
based on forecasts. Products or services are then created only once there is a customer demand for them.
Similarly, goods and information are supplied or moved only when they’re required.

The concept of pull builds on the basis of good flow in the value stream and improves it further by reducing
inventory and reducing the time it takes to produce a product, which is known as the cycle time. The aim is to
communicate real-time requirements to trigger production at each step. This synchronizes the production so that
products are made only as needed. To move the organization from a push to a pull system, we level out
production and then use kanbans to signal replenishment, and adjust the supply and delivery logistics. The final
step in the Lean process is to continuously seek perfection. This is done by identifying and incrementally
modifying practices that can be improved. A useful tool in implementing continuous improvement is the plan-do-
check-act cycle, or PDCA cycle for short

Lean Tools in Six Sigma

5S methodology
The primary aim of 5S methodology is continuous improvement of the general work environment which includes
both mental and physical work. The 5S methodology was developed in Japan with each “S” standing for different
Japanese terms.


Sort (Seiri): The first S was translated from the original Japanese word “Seiri,” which is the sort aspect of the
methodology that involves separating required items from the non-required items, eliminating the unnecessary ones, and
clearing out the clutter.

Straighten (Seiton): The second S again translated from the original Japanese word “Seiton,” the straighten aspect of the
methodology involves arranging and organizing the necessary items remaining after the sort stage and setting everything
in order. A commonly used phrase for this is “a place for everything and everything in its place.”

Shine (Seiso): The third S was translated from the original word “Seiso,” the shine aspect of the methodology involves
cleaning the work area, removing trash and defining the standards of cleanliness to adhere to. It also includes repairing
any broken machinery.

Standardize (Seiketsu): The fourth S was translated from the original Japanese word “Seiketsu,” the standardize aspect
of the methodology involves maintaining the clean work environment by setting a regular cleaning and maintenance
schedule. This is a step where the previous three S’s are standardized.

Sustain (Shitsuke): Lastly, S translated from the original Japanese word “Shitsuke,” the sustain aspect of the
methodology involves maintaining the 5S approach to work, ensuring that the method develops deep roots in the
organization and establishes 5S as a normal way of doing business.

Benefits of 5S Approach
The primary benefits of applying the 5S approach are –

•Improved work efficiency


•Reduced wastes
•Increased speed
•Employees are empowered to take control of their work environment, leading to an improved employee morale.
•Workplace safety often improves as well because the work area is cleaner and tidier, and the chance for accidents is
reduced.
•Cleaner work area also means that fewer mistakes are made which results in improved quality as a final output has fewer
defects.
Just-In-Time (JIT)
Just-in-time, or JIT, is a production and materials requirements planning methodology. It’s an important tool in
the implementation of Lean systems. JIT is used to control inventory and the flow of materials or products. The
idea is that materials are delivered just as they are needed for the next production step in the process. JIT can
also be applied to documents or information in non-manufacturing environments.

Features of JIT Methodology


•In a JIT environment, materials arrive at the exact time and place they are to be used with no waiting or storage needed.
•JIT aims to reduce on-hand inventory, if not eliminate it completely, as materials enter the next phase of production
immediately. Therefore in non-manufacturing situations work flows more smoothly between departments and there is no
waiting around for vital information.
•JIT reduces waste by allowing a process to produce items as they are needed, rather than storing large stocks of material
and finished goods.
•JIT helps save in terms of inventory holding and handling costs.

Just-in-time environment is tightly controlled, regulated, and coordinated for it to be successful. Some of the
common features of a JIT environment are –

•Demand triggers production that assists in implementing the Lean concept of pull.
•Workers are skilled in several areas so that they can help with different process aspects as needed.
•Lead time for production or completion of work is reduced.
•Potential suppliers are cautiously monitored to ensure they are reliable.
•Related process elements or departments remain in close physical proximity to one another to improve communication.

Kanban System
Kanban is a Japanese term which means ‘signal’. Kanban is primarily an inventory control system that specifies
when material or stock is needed by a process, and tells an upstream supplier to send material downstream. Now
these upstream suppliers may not necessarily need to be external but may be internal coworkers or a preceding
station in an assembly line. Originally, the kanban system was implemented as a manual system with the help of
visual cues like cards attached to storage bins. More modern systems use electronic notices that are passed
between departments until the supplier is notified of the need.

Ideally, the kanban system is a pull system, meaning it pulls materials and stocks into the process rather than
waiting for a scheduled time when materials and stocks are pushed forward into the process. In a kanban
environment, manufacturing only begins when there is a signal to manufacture. This is in contrast to a push
manufacturing environment where production is ongoing. The kanban system comes with a number of
advantages and disadvantages.
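For sizing such a system, a commonly cited rule of thumb (not taken from this text) estimates the number of kanban
cards from demand, replenishment lead time, a safety allowance, and container size. A hedged sketch with hypothetical
values:

# Hypothetical kanban sizing sketch using a commonly cited rule of thumb:
# kanbans = demand * replenishment lead time * (1 + safety factor) / container size.
import math

daily_demand = 400        # units pulled by the downstream process per day
lead_time_days = 0.5      # time to replenish one container
safety_factor = 0.10      # buffer for variability in demand or supply
container_size = 25       # units per container (one kanban card per container)

kanbans = math.ceil(daily_demand * lead_time_days * (1 + safety_factor) / container_size)
print(f"Kanban cards needed: {kanbans}")   # Kanban cards needed: 9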

Some of the advantages of the pull type kanban system are,

•It avoids overproduction by regulating the amount of raw materials available.


•It improves the overall quality and inventory control
•It also reduces setup and manufacture time.

Some of the disadvantages of the kanban system are that,

•No inventory buffer exists to carry production over in the event of non-delivery and there are potential bottlenecks.
•System can fail if organizations become dependent on the supplier’s ability to deliver on time, and when there’s no
provision for unforeseen events.

Poka-yoke System
Another Lean tool used is Poka-yoke or mistake proofing. Mistake proofing is an analytic approach that involves
examining a process to uncover where human errors could occur. Ideally, potential errors are traced back to their
source, and their potential is reduced by using a poka-yoke device. A poka-yoke device is any device that
prevents inadvertent mistakes that result in defects. These devices are usually very simple and inexpensive. For
instance the connector of a computer keyboard is a specific shape to prevent it from being incorrectly connected
thus avoiding potential damage to the computer.

Kaizen
Kaizen is a Japanese term meaning continual process improvement. Kaizen is an important Lean tool
and involves constantly improving a process through incremental steps rather than through a leap caused by
reengineering or redesigning a process. This process of improvement can take up to six months to bring about
the changes to a process. With a kaizen blitz, such changes can be forced to happen quickly. Note, kaizen blitz is
also typically called a kaizen event. It is an intense process that usually lasts about a week. During a kaizen blitz a
multi-disciplinary team spends time learning Lean techniques, focusing on a process, deciding potential
improvements, and then implementing these improvements to improve the process.

Applications of Lean Six Sigma in Service


Primarily the toolkit for Lean and Six Sigma originated on the shop floor to improve manufacturing processes. In
due course, people involved with marketing, sales, and finance started using some of these tools and
techniques, and found that the tools were just as effective in improving service processes as they were with
manufacturing processes.

As more and more service organizations reaped the benefits and shared their success stories, these
methodologies gained popularity in a wide variety of service organizations. At present, almost as many service
organizations as manufacturing ones use Lean Six Sigma to improve their organizations and build for future
growth, including hospitals, hotels, banks, software, and IT companies, among many others. Services have four
unique characteristics that make them different from manufacturing processes –

•Service is often an action or an event.


•Service is not something that can be held in one’s hands. For instance, a doctor gives a physical examination or an
insurance agent discusses a claim over the phone.
•Services cannot be produced and stored for future use. The tasks of a desk clerk when we check in at a hotel are a part of
service. Such that the outputs of services can’t be stored and must be consumed on the spot.
•Outputs of a service are variable. For instance a janitor at a hotel provides many different services. Each service is
initiated by a customer request. Each request can be different and can result in many different outputs. Because services
are perishable the output of a service is consumed at the same time or shortly after it is produced.

When a service organization decides to improve its processes and work flow by adopting Lean Six Sigma, a key
activity involves defining what quality means for services. The key element of delivering customer value in a
service organization is excellence in each and every aspect of service quality. Ideally, applying Lean Six Sigma in
a service organization involves identifying those costs and procedures that contribute to improving customer
satisfaction.

In general, there are primarily three areas that make up quality as a customer perceives it in service
organizations.

•First the organization must define what its product is in terms of the value that is delivered to the customer. That is the
service product which is equivalent to the product created by a manufacturing operation. For instance – In healthcare
service, the service product includes diagnosis and treatment. In transportation, the service product is a trip to a
destination plus the porters, flight attendants, baggage handlers, and beverages. Therefore the elements that comprise a
service product must be carefully defined. We know that a five-star restaurant and a fast food diner both aim to provide a
dining experience, however, the factors that comprise their service products are very different.
•Secondly, after defining a service product, the organization must define how that product will be delivered to customers.
This is a service delivery process. Similar to manufacturing, a service process generally contains a number of steps. Let us
say for example, the service product of an insurance company is an insurance policy. The service delivery process
includes all of the steps required to deliver the policy to the customer. These steps include making sales calls, signing the
contract, entering the contract into an IT system, printing the policy, and mailing the contract to the customer.
•Third, a service organization must define the nature of the customer-provider interaction. For instance, when
we go to the local supermarket, the checkout clerk greets us, scans the items, processes the coupons, makes change, and
sometimes bags the items. Similar kind of interaction happens in every retail establishment. In contrast to manufacturing,
all service organizations have a direct interaction with their customer. The customer-provider interaction has an enormous
impact on how the customers feel about the organization.

CASE STUDY
We shall now look at an example of Lean Six Sigma in the service industry, using an online mail
ordering system. The Accounting Department and external customers of an online mail order company are
complaining about the organization’s invoicing system. It takes much longer than it should to complete and
submit an invoice. Also the customers find the invoices confusing, and often have to call the Accounting
Department for clarification. A preliminary investigation indicates that in addition to other problematic issues,
there is no standard operating procedure for filling out the invoice. Management decides to take immediate
action. The cost of poor quality, or COPQ, is used to justify the project financially.

It was then estimated that it costs the company $400,000 per year plus the loss of customer goodwill. A goal was
established to reduce the invoice processing time by 50%. Management decides to use Six Sigma for its
structured DMAIC problem solving methodology and rigorous toolkit and statistics in order to control the
improvements. Management also decides to pull in Lean tools and techniques as needed within the DMAIC
structure.

During the Define phase, the project team identifies the parameters of the problem and determines how to
conduct the project. The team uses the project charter first. The project goal, scope, budget, personnel, resources
and the problem to be resolved are documented in the project charter. A SIPOC diagram is drawn to learn all of
the relevant elements of the project including the suppliers, inputs, processes, outputs, and customers. The team
members interview employees and external customers to learn what level of service they expect from the help
desk. In addition to phone interviews the team members also send out surveys.

During the Measure phase, the team collects data about cycle time and help desk processes. The cycle time is
found to be 167 minutes with a standard deviation of 81 minutes. The team also creates a detailed process map.
The inputs and outputs in each step are identified, and the team determines whether steps are controllable or not.
Steps that are controllable can be improved. Next during the Analyze phase, the team analyzes each step on the
process map. The team also brainstorms intensively, and creates the cause and effect diagram to identify all of
the potential causes of long invoice processing times. The team concludes that the root causes of longer than
accepted processing times are the lack of a documented operating procedure, and lack of expertise among the
technical helpdesk personnel. The team adopts a multi-pronged approach during the Improve phase. The team
recommends to the organization that they develop a standard operating procedure to help deliver the services
quickly and effectively. They also recommend establishing a requirement that anyone wanting to work on the
help desk must have a minimum of two years of technical experience. And the final recommendation is to
monitor the help desk semi-annually to ensure that employees are staying current with changes in technology.

During the Control phase, the last phase of the Six Sigma project, the team creates a checklist for internal
strategic management to use. The team also creates a control chart to monitor service processing time over the
long term. The Lean Six Sigma project ran slightly over four months and reduced the invoice processing time
from 167 minutes to 80 minutes, a reduction of roughly 50%, resulting in a total cost savings of $300,000. The
team used the Six Sigma DMAIC methodology along with SIPOC diagrams, process maps, statistical tools, and
control charts.
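A minimal sketch of how the limits of such a control chart might be computed, assuming hypothetical post-improvement
processing-time samples and the conventional three-sigma limits (the figures below are not from the case):

# Hypothetical post-improvement invoice processing times (minutes) and
# conventional 3-sigma control limits for ongoing monitoring.
import statistics

times = [78, 83, 80, 76, 85, 79, 81, 77, 84, 80]

center = statistics.mean(times)
sigma = statistics.stdev(times)            # sample standard deviation
ucl, lcl = center + 3 * sigma, center - 3 * sigma

print(f"Center line: {center:.1f} min, UCL: {ucl:.1f}, LCL: {lcl:.1f}")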
Application of Lean Six Sigma in Manufacturing
The objective of all manufacturing organizations is to make good quality products as efficiently as possible for
customers, and they all have a number of processes in common that can benefit from Lean Six Sigma
improvement. For instance,

•Improvement in quality assurance processes reduces the number of product defects, customer complaints and claims, and
also the amount of rework and scrap.
•Operations can be improved to reduce lead times, late orders, average cycle time per order, instances of emergency
maintenance, and scrap and rework.
•Purchasing processes can be improved to reduce the cost per invoice and purchasing errors, and consolidate the number
of suppliers.
•Distribution processes can be improved to consolidate the number of shipments, reduce freight charges, reduce the level
of returned products, and eliminate late deliveries.

CASE STUDY
We shall now discuss how a large playground equipment manufacturer used Lean Six Sigma tools to resolve
problems and meet organizational objectives and improvement goals. With an intent on maintaining its
competitive advantage and increasing profitability by at least 3%, the company launched the Lean initiative.
Management was intent on reducing lead time, improving efficiency, and eliminating scrap and rework.

Thereafter an outside consulting group was hired to provide training in Lean thinking and techniques. The
initiative began by targeting low-hanging fruit first. This started the ball rolling and got everyone on board with the
practice of continuous improvement. Eventually a full range of Lean tools and techniques were used including
value stream mapping, kaizen blitz and the plan-do-check-act cycle. After two years, the company cut lead time
by 60%, improved productivity by 22%, and decreased scrap from 0.8% to 0.6%. The company also achieved a
just-in-time approach to inventory management and effected a culture change throughout the organization. The
employees became highly motivated by continuous improvement and they expressed pride in their work and their
company. However, clearing out the low-hanging fruit exposed a deeper, more persistent problem.
Pinholes were discovered in the PVC coating on the pipes used to make the playground equipment. The pinholes
created a large amount of scrap and generated returns from unhappy customers who received the defective
equipment.

To resolve the problem, management turned to Six Sigma tools and techniques. A cross-functional team was
formed and the team spent time on the shop floor observing the process and talking to the various employees.
They took measurements on temperatures, PVC thickness, and tolerances in order to perform a rigorous analysis
that’s required by Six Sigma.

Detailed process mapping enabled the team to isolate the root cause of the problem to the welding process. The
braziers were given no specifications on how to create their welds. They were expected to use their own
judgment to achieve a good flame on their torches. Lack of a standard operating procedure permitted variation to
enter these processes. To rectify the problem, a standard operating procedure was immediately created to
instruct the braziers on how to achieve the right flame. This reduced variation and also vastly reduced the
instance of pinholes in the PVC.

During its analysis, the team identified another source of variation in the priming process. Operators were
expected to remember which racks of pipes had been sent to the priming baths, but sometimes they forgot. The
team solved this problem easily with a poka-yoke. An indicator light was installed on each rack to show which
racks had been primed and which still needed to be primed. As a result, there was less variation in the process
and scrap was reduced to 0.2% and returns were reduced to 30%.

Now let’s take a look at another example. A factory in the US that produces commercial refrigeration equipment
was incurring expensive rework loops, warranty claims, and most notably widespread customer dissatisfaction
due to product failure.
The refrigeration units coming off one of the two production lines were having too many instances of leaks as
reported by customers and the company’s own field salespeople. Having tried everything else, the company’s
management charged the Lean Six Sigma team with addressing this issue to improve the product and restore
customer confidence and satisfaction. The improvement team started with the goal of analyzing and resolving
the problem using a five-day kaizen event. The team used several tools and techniques to identify and analyze the
root cause of the leakage. A Pareto analysis indicated that 80% of the leaks were occurring in the return bend of
the coils. The return bend is a U-shaped piece of tubing that is connected to the straight tube by soldering. The
team decided then to focus on the soldering process. The team then performed a root cause analysis with a
cause and effect or fishbone diagram. The team members listed the materials, methods, machinery,
measurement, manpower, and environmental influences on each step in the soldering process. Next, the team
created a process map and drilled down. The team members identified 20 separate steps involved in soldering, such
as cutting a piece of tube, installing it and soldering it to the return bend.
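The Pareto analysis the team used can be reproduced with very little arithmetic: tally the leaks by location, sort the categories from most to least frequent, and report each category's share and cumulative share. The sketch below is a minimal illustration in Python; the leak categories and counts are hypothetical, chosen only so that the return bend accounts for roughly 80% of the total.

# Minimal Pareto analysis sketch (hypothetical leak counts, for illustration only).
leak_counts = {
    "return bend": 120,
    "straight tube": 15,
    "header joint": 10,
    "valve fitting": 5,
}

total = sum(leak_counts.values())
cumulative = 0
# Sort categories from most to least frequent, then report each share and the running total.
for location, count in sorted(leak_counts.items(), key=lambda item: item[1], reverse=True):
    cumulative += count
    print(f"{location:14s} {count:4d}  {count / total:6.1%}  cumulative {cumulative / total:6.1%}")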

Having focused on the soldering process, the team mapped out the steps and spent two days making hundreds
of measurements for each of the 20 steps involved in the process. The measurement data revealed an
astonishing fact. There were dozens of variations across the steps, from how far the coils were from each other,
to how much the tubes stick out before being connected to the return bend. However while variation is never
desirable, not all of it contributed to the leaks. So the question remained, which variations produced the leaks?
The team compared the soldering process in each production line. The tubes in the production line producing
units with no leaks stuck out much farther than the tubes on the other production line.

Longer tubes mean more overlap and better coverage of the soldering material on the return bend. This turned
out to be the cause of the leaks. The solution was to increase and standardize this overlap. During the analysis of
the soldering process, the team also observed that employees were not getting feedback on the quality of their
work. Using Lean Six Sigma principles, the team corrected that situation with the feedback process. Defective
joints were returned to workers for correction, which resulted in higher quality at the source. In all, the
improvements resulted in a reduced variation, brought more standardization to the process, and improved
communication among various teams in the soldering process. The result was a 60% reduction in product
defects and an 80% reduction in customer returns. Overall, it was a savings of $5 million from this project alone.

In an organization, each individual values different things, so it can be difficult to agree on what is wasteful and
what is useful. Lean gives clear guidance here by making it simple to define what is valuable for a business and
what constitutes waste.

According to Lean, value is a rating of how well a product or service meets the customer's requirements. If it is
something a customer is willing to pay for, it has value. The different activities performed during a process
either add value to the end product, or they don’t. So activities can either be value-added, or non-value-added.


Value-added activities: These are the actions and tasks that make a product more complete. Value-added activities
increase the value of the end product in the eyes of the customer. They make a product or service more valuable from the
customer’s point of view. For instance, connecting components, polishing a completed product, and delivering the product
to a customer are value-added activities.

Non-value-added activities: These are the actions that add no extra value to the end product or service from the
customer’s perspective. These activities include any nonessential actions that the customer doesn’t or isn’t willing to pay
for. For instance transporting components and conducting inspections are nonvalue-added activities.

Value-added activity example – By slicing bread before offering it for sale, an enterprising baker adds a step that
makes his bread more convenient for customers, and customers become willing to pay a little extra for pre-sliced
bread. It follows that any procedure a customer is willing to pay for adds value. Bear in mind, though, that almost
every procedure a company implements seems valuable to someone inside the company.

Non-value-added activity example – A manager may value long, thorough hiring interviews. The interviews have
value for people inside the company and may be necessary for hiring the right people. However, customers are
not willing to pay for these procedures, so from a Lean perspective hiring interviews do not add value for the
customer.
To determine whether a procedure or activity adds value, three questions must be answered.


Does it fulfill a customer’s need or preference?
Example: When workers bolt a handle onto a product, they are fulfilling a customer's need for a handle on the
product. Customers want handles and expect to pay the cost of assembling the product, so this procedure adds
value.


Does it change the product or service in some way?
Example: Installing a spark plug in an automobile engine or cooking a hamburger makes a physical change in the
item that helps it become a finished product. Similarly, a programmer who writes computer instructions or a
banker who negotiates the interest on a loan are acting in a way that makes a change in a service, even if the
change can’t be seen.

But in an office setting, making and storing extra copies of documents doesn’t change the service provided. So
this activity doesn’t add value.


Is it done right the first time in the process?
In Lean, activities are focused on getting things right the first time. For instance using a poka-yoke, which is a
mechanism that automatically prevents or corrects errors as they occur, can add value by improving the quality of
a product for customers. But note that activities that involve finding or correcting mistakes only after they have
been made aren’t value adding activities. For instance, a customer who buys a television won’t find the product
more valuable because the company that built it needed more than one try to get the wiring right.
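The three questions above amount to a simple all-or-nothing test: an activity is value-added only when every answer is yes. A minimal sketch of that rule, with hypothetical activities, might look like this:

def is_value_added(fulfills_customer_need: bool,
                   changes_product_or_service: bool,
                   done_right_first_time: bool) -> bool:
    # An activity adds value only if all three Lean questions are answered "yes".
    return all((fulfills_customer_need, changes_product_or_service, done_right_first_time))

# Hypothetical examples for illustration.
print(is_value_added(True, True, True))    # bolting a handle onto a product   -> True
print(is_value_added(False, False, True))  # storing extra copies of documents -> False
print(is_value_added(True, True, False))   # reworking faulty wiring           -> False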

Not all activities that fail to meet the value-adding criteria should be eliminated. Certain non-value-added
activities are necessary even though they neither change the product or service nor meet customers' preferences.
Some are needed to operate the business, or to meet regulatory or accreditation standards that apply to the
organization. Such activities are referred to as 'required non-value-added activities'. For instance, writing a
production report, paying wages, or even implementing a Lean initiative is necessary for the efficient running of a
business, even though it adds no direct value for the customer.

In general, quality inspections are considered required non-value-added activities. At least one quality
inspection is typically performed before a product is delivered to the customer as a risk management strategy.
Redundant or excessive quality checks, however, add no value and are not required. Business
support departments such as HR, IT, finance, and legal departments typically provide required non-value-added
services. An organization’s goal should be to minimize the effort needed by these services. Let’s say for example,
the payroll management service ensures that employees receive the right salaries on the exact dates. This
service is necessary for maintaining their workforce. So although it doesn’t specifically add value, it can’t be
eliminated. However an organization can reduce the costs, time, and effort associated with providing the payroll
service.

By identifying value from the perspective of the customer, a fresh view can be gained of what the organization
actually does. The organization can then focus on the value-added activities, minimize or remove the
non-value-added activities, and modify or reduce the time associated with the required non-value-added activities
so that they use fewer resources. Let us take a look at an example. An insurance company wants to eliminate
waste and add value to its processes and procedures. For the company, an example of a value-added activity is
creating modular packages to meet the different needs of its customers. Making hard copy backups of contracts
and other documents and having employees walk into the file rooms to retrieve these documents is non-value-
adding. Nevertheless, maintaining the document copies is regarded as a necessary precaution for the business.
In order to minimize the effort expended on backing up data, the company automates the process of copying
electronic versions of files. The file backups are then instantly sent to Internet-based storage, and
employees can access the data copies from their computers.

Lean’s Seven Wastes


The primary goal of Six Sigma professionals working on improvement projects is to distinguish value-added
activities from required non-value-added and non-value-added activities.

First type of waste – Non-value-added activities


Non-value-added activities or processes are considered waste. Waste is also referred to by the Japanese term
"muda" and is defined as any activity that does not add value to a process or product but still adds cost.
Essentially, waste is anything that is not essential to the manufacture of a product or delivery of a service. Taiichi
Ohno, the founder of the Toyota Production System, identified seven forms of waste: overproduction, extra
processing, motion, waiting, transportation, inventory, and defects. Under Lean Six Sigma
projects, identifying and removing wastes could be very useful especially during the Improve stage of the DMAIC
methodology.

Second type of waste – Overproduction


Overproduction is the creation of more products, services, or components than the operator, the next stage in the
production process, or the external customer needs. If too many items are produced by one step in a
process, they will be left in storage until the next step in the process is ready to handle them. But this moves
against the Lean principle of creating a just-in-time environment where items arrive just as they are needed for
the next step in the process. Stored work in progress costs the company money. Yet it does not help it in any way
until the items are completed and the customer pays for them. When the number of completed items exceeds the
number required by the customer, they must be stored until such a time as another order is placed, creating a
demand.

This could be any time, and items sitting in storage in this manner don't make money for an organization. They tie
up funds and halt cash flow, because the raw materials used for this production have already been paid for as a
cash outlay, yet no cash has been received in return for the completed product. So in
order to avoid overproduction, managers need good metrics to evaluate the consumer market, and process
capability so that they can match production to demand. Extra processing is applying more processes than
needed to create a product, service, or component. Typically this includes using overly large or complex
equipment, reworking within a process, or any additional, unneeded steps in the manufacturing process.

Third type of waste – Process and Motion waste


Process waste can also include overly complex designs that add more to a product than customers are willing to
pay for. Examples of process waste include extra setup procedures for
machinery, inspection steps, handling during packaging or delivery, and documentation.

Motion wastes involve people, information, or equipment that make unnecessary motion due to workspace laws,
ergonomic issues or searching for misplaced items.

Illustration: Consider a workspace where workers must walk 50 feet to a central raw material storage area to get
screws before they can begin manufacturing the sunglasses that the factory produces. This adds time to the
process, which costs money. If the screws were within arm's reach of the worker's workbench, each pair of
sunglasses would be completed faster and the waste would be eliminated. Unnecessary motion also includes
excessive bending, stretching, and reaching for tools or materials; this indicates a poorly designed workspace that
could be redesigned to eliminate the motion waste.

Fourth type of waste – Delays or Waiting Time


In some cases equipment or employees waste time waiting for another process to be completed before
performing a task. Usually, waiting is caused by unrealistic or badly planned scheduling and process delays.
Reasons for delays include holdups due to delivery problems and downtime, as well as process and design
changes. Employees may have to wait for raw materials to be delivered before work can start, for an inspection to
be carried out before work can continue, for information to be relayed or reports to be compiled, or for machines
to complete a production cycle.

Fifth type of waste – Transportation


Transportation is the movement of a product or its components. All transportation other than delivery of a product
to the customer is considered waste.

Illustration: For instance, consider a factory where items are manufactured in one area. After which they are
moved with a conveyor belt to an inspection area, and then moved again using a forklift to a storage area where
they are left until they are ready for shipping. In this case, moving the items to a different area for inspection is
unnecessary because inspection could be performed as items come off the production line. Moving completed
goods to a storage area before packing and shipping is also unnecessary. The products could simply be moved
directly from inspection at the production line to the packing area. The additional movement adds cost to the
products and increases production time, but adds no value.

Sixth type of waste – Unnecessary Inventory


Inventory refers to the material that is not yet needed and must be stored. Note that unnecessary inventory
means any items that must be stored including raw materials, work in progress, and finished goods. Inventory
may originate from suppliers, or it could be the result of overproduction. Inventory requires storage space, and the
storage and handling costs associated with unnecessary inventory add to the manufacturing cost of the end
product while adding no value to it. Stored inventory may end
up costing an organization even more if it becomes damaged while in storage. In which case, the entire
manufacturing cost of the damaged items is lost.

Seventh type of waste – Defects


Defects are flaws in a product or service that cause the product, or a part of it, to be scrapped, discarded, or
reworked, and this represents waste. When an item is defective or rejected for whatever reason, the entire item
may have to be scrapped or the defective part sent for repair. If the item is scrapped as defective, all of the
resources invested in it are wasted with no gain; the cost of the raw materials, labor, and transportation involved
must be carried by the organization with no way to recover it from customers. If the item is repaired, the cost of a
new part and the cost of rework are added to the cost of the completed item. Customers do not pay for these
additional costs; they pay only for the value of the item as if it had been produced correctly the first time. Neither
scrapping nor rework adds value to the end product; both only add cost.

Eighth type of waste – Underutilized skills set


An additional kind of waste has been added to represent the non-utilized or underutilized skills and talents of
employees. With this eighth type of waste, there is an easy acronym to remember: DOWNTIME. It stands for
Defects, Overproduction, Waiting, Non-utilized talent, Transportation, Inventory, Motion, and Extra-processing.
Value stream analysis
Value stream can be defined as the flow of materials, service, and information that bring a product or service to
customers from start to finish. The process of value streaming includes suppliers, production operations, the final
customer, and all the activities in between. In other words, it is the way that value is delivered from the source or
starting point through to delivery to the customer. Any obstacles or wastes, such as wasted time, unnecessary
motion, or excess inventory, disturb the flow of value in a value stream. A clear picture of the full value stream is
therefore required before determining where waste is occurring or planning how the flow of value to the customer
can be made more efficient. One of the most useful tools for attaining this picture is a 'value stream map'. Instead
of focusing on an individual process, value stream maps show the entire value stream of activities that go into
producing value for the customer. Although the value stream map was originally used in manufacturing situations,
it is equally useful in service situations for identifying waste. Services have the same sort of value stream that
provides a finished product to a customer. The value stream map provides
on one sheet of paper the flow of information, and the flow of the products.

The process of building a value stream map starts with a focus on the customer and then works backwards
through the production processes and then to the suppliers. This also includes the information flow that’s
captured from the central planning system to the customers, and the suppliers and each individual step within the
process.

The process of value stream mapping and analysis involves the creation of two maps.


Current State Value Stream Map: It is a visual representation of the process as it currently operates. This map provides
a starting point or baseline for identifying wastes and their causes, and it reflects the current state of the system.

Future State Value Stream Map: The second map reflects the future state. The future state value stream map represents
the targeted state of the process, or the state once improvements have been implemented. It also highlights areas in the
process where improvement initiatives are required and which flows should be altered to create a leaner production and
information flow.
Here, each value stream map is a graphic chart of the processes and the activities that produce the product as it
goes through its entire creation. Both the maps demonstrate the production flow and the information flow.

Steps in conducting a Value Stream Analysis


In order to be meaningful, a value stream map should be created for a single related group of products.

STEP 1 – Define the Product Family


In step one the product family is defined. Product families can be identified by the similar processing steps and
equipment that the products pass through as they move downstream in the manufacturing or service system. It
is important to clearly define and document all of the details of the product family in step one. This includes what
the related products are, the demand for the products, and the rate of sales. A convenient tool for determining
which products are related is the Product and Equipment Matrix. This type of matrix plots each product against
the processes and equipment that it passes through, so we can determine which products go through similar
processes in the system. For the sample manufacturing process, the first and the fifth items receive very similar
treatment in the production processes. They both go through clean, polish, press and trim so they probably
belong in the same product family. A Pareto chart can also be used to choose the product family. Products with
the highest production volume could be chosen for the value stream map. Process operators and line managers
are the ideal people to help define the product family. Also a cross-functional team can provide valuable insight
into the product family selection.
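The product and equipment matrix can be kept as simple as a table of process routings. The sketch below, with hypothetical items and process steps, groups products that share the same routing; this is how the first and fifth items in the example above would end up in the same family.

# Hypothetical process routings for a product and equipment matrix (illustration only).
routings = {
    "Item 1": ("clean", "polish", "press", "trim"),
    "Item 2": ("clean", "drill", "weld"),
    "Item 3": ("cut", "press", "paint"),
    "Item 4": ("clean", "drill", "weld"),
    "Item 5": ("clean", "polish", "press", "trim"),
}

# Products that pass through the same sequence of processes form one candidate family.
families = {}
for product, steps in routings.items():
    families.setdefault(steps, []).append(product)

for steps, products in families.items():
    print(", ".join(products), "->", " / ".join(steps))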

STEP 2 – Create a Current State Map


After deciding on the product family to map in step one, the second step involves drawing a current state map.
Ideally this starts with a walk-through of the process by the improvement team to collect
current state information. This means physically tracing the path of both the material flow and the information
flow of the entire value stream which provides an important sense of its sequence and flow. It is suggested that
the team should work backwards through the value stream, as this keeps the emphasis on what the customer
values.

So, beginning at the final link to the customer and backtracking through the system allows the team to consider
how each process step adds value or creates customer value. It can be very useful to have every team member
draw an individual map of the current state using the standard icons to convey key information. And then
afterwards we can integrate each of these maps into a single current state map. There are different types of data
that can be collected for a value stream map which usually include cycle time, net operating time, and work in
process. Other types of data may also be collected, such as lead time, which is the total time it takes for one unit to
complete the process; this includes both value-added time and non-value-added time. Throughput time is
the time it takes for a unit to be completed by the operation from start to finish.

Changeover time is the time it takes to switch production from one type of product to another. Uptime is the
actual machine time available to run divided by the time scheduled to run. Work time is the available time for the
period minus any breaks or cleanup. For example, if a worker has an eight-hour shift with one hour for lunch and
two fifteen-minute breaks, the work time for the shift is six and a half hours. Queue time is the time that
work has to wait before being acted upon.
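These definitions translate directly into simple arithmetic. The short sketch below, using hypothetical shift figures, computes work time and uptime exactly as defined above.

# Hypothetical shift data, used only to illustrate the definitions above.
shift_hours = 8.0
lunch_hours = 1.0
break_hours = 2 * 0.25                         # two fifteen-minute breaks

# Work time = available time for the period minus breaks and cleanup.
work_time_hours = shift_hours - lunch_hours - break_hours
print(f"Work time: {work_time_hours} hours")   # 6.5 hours, as in the example

# Uptime = actual machine time available to run / time scheduled to run.
scheduled_hours = work_time_hours
actual_run_hours = 5.85                        # hypothetical availability
print(f"Uptime: {actual_run_hours / scheduled_hours:.0%}")   # 90%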

Even though value stream mapping icons are not standardized, some of the accepted ways to portray the
common elements and activities in a process are –


Inventory – It is represented with a triangular icon with the letter I in the center. So while mapping the current state the
level of inventory can be approximated with a quick count. And that number can be placed beneath the symbol.

Electronic flow – It is represented by the jagged arrow icon. This symbolizes the electronic flow, or exchange of
information, across such platforms as the Internet, intranets, local area networks and wide area networks. One could also
include the frequency of interchange, type of media used and the type of data shared.

Manual information flow – This is represented by a straight arrow. This can include memos, schedules, conversations
and reports. This can also include the frequency of the interchange.

Process– A process is represented by a rectangle with a horizontal line near the top. A process could be an operation,
machine, or department through which material flows.

Source – A source is represented by a rectangle icon with a jagged top. This symbolizes the usual starting and ending
points of the material flow. The supplier is the usual starting point for the process flow, and the customer is the usual
endpoint.

Buffer stock – This is represented by a stack of three vertical boxes. This icon represents a safety net of stock which is
sometimes referred to as a supermarket, and it’s used to protect against such things as fluctuations and custom orders or
system failures.

Pushing items: Push arrow icon is a striped, straight, horizontal arrow that represents the pushing of material from one
process to the next. It means that something is produced regardless of the immediate needs of the downstream process.

CASE STUDY
Linto Pvt Ltd., an auto parts manufacturer, asked its Six Sigma team to conduct a value stream analysis of its
front and rear bumper manufacturing process. After defining the product family in step one, the team members
are in the process of creating a current state map. They start with the source symbols.
The supplier is placed on the left, which is the usual starting point for material flow. The customer is
placed on the right which is the usual endpoint for material flow. There are five main processes defined –
Fabrication, Molding, Machining, Painting, and Inspection, and these are represented by the rectangular process
boxes. These boxes contain details about the cycle time, changeover time, and uptime.

For instance, the fabrication process has a cycle time of 5 minutes, a changeover time of 45 minutes, and an
uptime of 90%. The team then adds icons for inventory locations next to each process box. Small boxes are also
used to add details about the inventory type and the quantity that are found in each inventory location.

For instance, the inventory icons on each side of the fabrication process tell us that inventory is held between the
supplier and the fabrication process, and that 4400 items of inventory are passed from the fabrication process to
the molding process. Additional icons representing movement of raw materials to the
manufacturer and finished goods to the customer are added in the form of trucks.

The information flow beginning at the production planning and control and running down to the processes
through the production manager is represented by the manual information arrows between icons. The jagged
arrows between processes, suppliers’ sources, and customer sources indicate that the information flow between
these entities is electronic. Putting the lead time ladder along the bottom of the map is a final step. To get the
total production lead time, the team takes the inventory number and divides it by the daily customer
requirements. In the ladder, the peaks represent the lead time between each process; in this case 1 week, 4
weeks, 5 weeks, 2 weeks, 1 week, and 2 weeks. And the troughs represent the value-added time associated with
each process: 5 minutes, 8 minutes, 12 minutes, 14 minutes, and 6 minutes. Then we add the values for both of
these and this gives us a total lead time of 15 weeks and a value-added time of 45 minutes.
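The arithmetic behind the lead time ladder can be checked in a few lines. The peaks are the waiting times between processes (in weeks) and the troughs are the value-added times (in minutes); each peak comes from dividing the inventory count by the daily customer requirement. The daily demand figure below is hypothetical, chosen only so the 4400-item inventory works out to the 4-week peak shown on the map.

# Lead time ladder figures from the bumper value stream map.
peaks_weeks = [1, 4, 5, 2, 1, 2]           # waiting / inventory time between processes
troughs_minutes = [5, 8, 12, 14, 6]        # value-added time at each process

print(sum(peaks_weeks), "weeks of total production lead time")      # 15 weeks
print(sum(troughs_minutes), "minutes of total value-added time")    # 45 minutes

# A peak = inventory count / daily customer requirement, e.g. the 4400 items before molding.
daily_demand = 220                          # hypothetical units required per day
days_waiting = 4400 / daily_demand          # 20 working days, i.e. 4 weeks at 5 days per week
print(days_waiting, "working days of inventory ahead of molding")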

STEP 3 – Create a future state


The third step of the value stream mapping process involves drawing a future state map. Here, the information
used to create the current state map is combined with any insight gained from the physical
evaluation of all activities. Together with the knowledge about Lean improvement methodologies, these form the
basis for imagining and designing an improved future state map for the system. In order to create a future state
map, the six sigma team uses various Lean improvement tools such as pull, takt time, kanban, setup reduction,
and total preventive maintenance. The process of creating future state map is where the team identifies creative
solutions for all of the identified issues. The future state map demonstrates specific changes that are required to
the current state to achieve the predicted future state of the value stream. The areas where kaizen events would
improve the process are noted, and potential areas for improvement are circled on the map.
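Of the Lean tools listed above, takt time is the most arithmetic: it is the available work time for a period divided by the customer demand for that period, and it sets the pace the future state must achieve. A minimal sketch with hypothetical figures:

# Takt time = available work time per period / customer demand per period.
available_minutes = 6.5 * 60      # one shift of work time (see the earlier work time example)
demand_per_shift = 130            # hypothetical customer demand in units per shift

takt_time = available_minutes / demand_per_shift
print(f"Takt time: {takt_time} minutes per unit")   # 3.0 minutes per unit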

Some additional icons commonly used in the future state map are:

•Pull arrows: These are a major change from the current state map, where flow is indicated by material flow arrows and
push arrows.
•Kaizen bursts: These are represented by gold bursts.
•Signal kanban: This icon is typically used to represent a withdrawal from a supermarket.
•Kanban post: This is a location where the kanban cards are kept; a production kanban is a signal to produce more.

The Six Sigma team maps the future state of the value stream for the automobile manufacturing process. The
results of the analysis of the value stream map showed several opportunities for improvement, such as
combining processes to create a better flow and eliminating batching of inventory between processes. We shall
now take a look at some of the elements of the future state map and what they mean in relation to this example.

ILLUSTRATION
Let us consider a case in which two kaizen bursts were targeted: a design kanban and a design work cell. The
work cell was created for the fabrication, molding, and machining operations. Two bottleneck processes were
also identified, machining and painting. The cycle time related to these processes is eight weeks and a goal is set
to reduce this. In the future state, the flow of products between suppliers, the manufacturing process, and the
customers will be based on the pull of customer demand. An inventory supermarket was used at the material
receiving starting and ending points to reduce the total inventory and limit the overproduction. In addition, a pull
method was used between the sub processes for this purpose.

A kanban post represented by the goalpost symbol was identified between the painting and inspection sub-
processes for the pickup of the in-progress material. And then finally, signal kanbans represented by the inverted
triangles were used next to the inventory supermarkets to signal a trigger or a minimum point whenever the
inventory levels in a supermarket between the two processes drops. The cycle time was estimated to reduce by
33% and the lead time was estimated to reduce by 31%. The future state map is also expected to reduce the
inventory substantially. More sophisticated forecasting, faster reporting, improved communication, and reduced
inventory and monthly blanket orders cut the lead time and cycle time even further. The requirement for visual
inspection was eliminated by using automated inspection, and the symbol was removed from the future state
map.

STEP 4 – Plan Implementation


The final step in the value stream mapping process is the plan implementation which includes developing a step-
by-step plan. When developing the plan, we use the value stream map to highlight the goals and objectives for the
team. The team should include those working with the relevant processes to ensure buy-in. The plan also
highlights the types of interventions needed. For instance, these might be just-do-it fixes that could be
implemented in a day, kaizen blitzes that take 2 to 5 days, or even longer-term projects. The next step is to
present the plan to management and secure the necessary resources, and finally to execute the improvement
plan to reach the future state.

Introduction to DFSS
Indeed Six Sigma and Design for Six Sigma are extremely powerful tools, but each has a slightly different focus.
The process of Six Sigma uses the Define, Measure, Analyze, Improve, and Control, or the DMAIC methodology.
This method is typically used when we already have a process in place, but the issue is in the variation or defects,
and we are trying to improve that process. On the other hand, Design for Six Sigma, is used more for the upfront
planning. Design for Six Sigma is used before we have a process in place and when we are trying to make
improvements prior to starting the production process.

From experience, Six Sigma typically yields only so much improvement with an existing process. Research has
shown that using Six Sigma principles, one can typically reach about a 4.5 sigma process. After that point, we hit
a brick wall and have to go back and redesign the product or the processes. This is where Design for Six Sigma
comes into action.

Design for Six Sigma (DFSS) is a quality management approach which is used to design, or redesign, a product or
a service. Design for Six Sigma (DFSS) uses many of the principles of Six Sigma and a very similar philosophy, but
it applies them to the upfront design work when developing products, processes, or services. By using Design for
Six Sigma for the upfront work, we are being proactive
by choosing to achieve a Six Sigma level from the very beginning. Another key aspect of Design for Six Sigma is
that as we are designing these new processes, products, or services, or we are redesigning them, we are taking
into account our customers’ requirements and expectations. And we are using this information throughout every
step of the design process to ensure that our products, processes, or services meet our customers’ expectations.
By doing this, we can make sure that our organization delivers exactly what the customer wants and we can
meet, or even exceed, the customers’ expectations. Studies have shown that approximately two-thirds of a
product’s costs come from the design of the product or the service. If we go through and understand really what
the customer wants, and we design it right the first time using tools such as Design for Six Sigma, we can
improve the benefits to our customers.
Benefits of Design for Six Sigma
•Design for Six Sigma helps to greatly reduce errors or defects as the planning of the process is done with Six Sigma
philosophy in mind.
•Design for Six Sigma helps to reduce production costs and increase customer satisfaction, as it ensures that the company
meets the customers' requirements, which are taken into account at every step while designing or redesigning the product,
process, or service. Design for Six Sigma therefore generates the same types of benefits as the Six Sigma DMAIC
methodology.
•Design for Six Sigma also supports the larger Six Sigma goals, as it is applied before the product is even designed or
produced, which helps the organization reach its long-term Six Sigma goals.
•Design for Six Sigma also helps to build in Six Sigma quality levels, since that mindset is taken into account and new
products or services are designed around the customers' expectations in a way that reduces variation.
•Design for Six Sigma is very helpful in new product and service development as it is used to redesign, or to develop a
whole new product or service.

Primarily there are two types of applications for Design for Six Sigma.

•The first type of Design for Six Sigma application is Product or Service DFSS – used when the organization is developing
a type of product or service that has never been offered before. In doing so, the organization builds upon its existing
knowledge of how to build the products or services it has previously offered.
•The second type of Design for Six Sigma application is Process DFSS. This is used when developing a new process: the
organization is not currently manufacturing or producing the product, so a brand-new process must be designed. The
process must be designed in a way that accounts for variability and focuses on reducing it. Design for Six Sigma and New
Product Development are closely related, but Design for Six Sigma is not meant to replace New Product Development; it
is meant to enhance the New Product Development process.

The five key steps with the New Product Development process are –


Concept Study – Before moving any further with a new product idea, determine whether there are any unknowns about
the market, technologies, and processes associated with the idea.

Feasibility Investigation – This step involves determining if the issues found in the Concept Study are resolvable and
what limitations might exist.

Development – In the third stage the focus is on the development which requires establishing the specifications and the
needs of the customer. It is required to determine the target markets, and use a cross-functional team to set up tollgate
stages.

Maintenance – This stage includes all of the post-delivery tasks associated with product delivery.

Continuous Learning – Final stage, involves preparing the project reports and evaluations used to ensure that the teams
learn from both their successes and their failures.
In this case, Design for Six Sigma incorporates several additional tools.


Failure Modes and Effects Analysis (FMEA) – FMEA is a structured method for analyzing potential failures and their
effects, and how these might influence design parameters.

Quality Function Deployment (QFD) – QFD helps to provide an organization with a clear understanding of what the
customers’ needs actually are. This helps to translate the voice of the customer into the technical requirements to generate
the best possible products, services, and processes.

Design of Experiments – It is a structured methodology that enables teams to design experiments and analyze their
results. The primary focus of Design of Experiments is understanding and controlling variation in the key process inputs,
and then using that information to improve the project outputs. Design for Six Sigma (DFSS) uses these experiments to
discover and validate the relationships between the inputs and the outputs of a process (a brief illustrative sketch follows
this list of tools).

Robust Design Optimization – This helps to integrate experiments early in the development process. This really helps
the team to discover the optimal solutions to meet the needs of the customer. And these solutions are typically strong and
very adaptable so that the team can accommodate changing focuses and enduring problems.
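As a concrete illustration of the Design of Experiments entry above, the sketch below enumerates a two-level full factorial design for three hypothetical factors. Each row is one experimental run; running every combination lets the team estimate how each input, and each interaction between inputs, drives the output.

from itertools import product

# Hypothetical factors with low/high levels for a 2^3 full factorial design.
factors = {
    "solder temperature (C)": (350, 400),
    "tube overlap (mm)": (5, 10),
    "feed rate (m/min)": (1.0, 1.5),
}

runs = list(product(*factors.values()))      # 2**3 = 8 runs covering every combination
for i, settings in enumerate(runs, start=1):
    print(f"Run {i}:", dict(zip(factors, settings)))
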
Condition for using DFSS
Before deciding to roll out Design for Six Sigma, an organization must attend to four key considerations.

•The first thing that needs to be understood is the organization's current sigma level and its trend, together with the
business environment. If the sigma level is rising fairly steadily, it is unlikely that the organization really needs to focus on
new designs. However, if the rise in sigma levels is starting to slow significantly, it means the capacity of the existing
processes has been reached. Research suggests that at a level of about 4.5 sigma, an organization has probably reached the
point where it needs to go back, start redesigning its products and services, and go after harder process improvements. It
is equally crucial to understand the level of change in market demand, customer requirements, and technology within the
organization.
•Second we must be able to manage the organization’s business environment. That is if we have changes in market
demand, customer requirements, or technologies, these might render the products or services obsolete in the near future.
Therefore, if we have situations like this that are hindering the business’ growth, Design for Six Sigma could help the
organization adapt to these changes.
•Third, we need to look at the organization's prioritized project schedule. When working through process improvement
projects, plan accordingly and prioritize the list based on each project's complexity. In general, it is simpler to go after the
simple, low-hanging-fruit projects first and then prepare the organization to roll out Design for Six Sigma for the more
complex projects.

Fourth, it is very important to understand the organization's capacity to roll out Design for Six Sigma. We need to
consider how the organization is currently handling improvement projects, the resources available, and whether
the teams and management can become more involved in the design.

We now take a look at some of the situations that are best suited for Design for Six Sigma. We start with an issue
that we want to address. With this we can look first to see if the product or service currently exists. If it doesn’t
currently exist, then we want to go through and use Design for Six Sigma to design or develop that new product or
service. However, if the product or service does currently exist, then we want to go through and use tools such as
the DMAIC methodology from Six Sigma, Kaizen, or other improvement tools to improve the project initially, and
go after some of the low-hanging fruit.

We then determine whether the improvement is sufficient to yield the desired quality or sigma level. If it is not,
that is when we should consider using Design for Six Sigma, which allows us to reach the next level of
improvement. If we have reached the desired level of quality improvement, we can continue using DMAIC and
other improvement tools to further enhance the project.

Let us start with some case based examples to understand when to use Design for Six Sigma, and when to use
DMAIC.

•An automobile manufacturer has seen the defect rates for its drive shafts drop steadily over the last 15 months since
implementing the DMAIC methodology. In this case, it makes more sense to continue moving forward with the DMAIC
methodology, as they are heading in the right direction. DMAIC is working well for the organization, so there is little
reason to consider redesigning the product.
•The second example is a mobile phone manufacturer that’s decided to launch video phones as a new line of products.
Unfortunately the company has been losing market share, and its defect rates are not dropping. In addition, its sigma levels
are not increasing. They have exhausted all of their possible efforts including several DMAIC projects, trying to meet
customer specifications, but their customer satisfaction is still very low. This situation calls for Design for Six Sigma
because it’s a new process and all of their possible efforts previously have not helped enough. In this case, Design for Six
Sigma is a better strategy for designing new products or processes, or for redesigning existing products and processes.
•A new soft drink company is planning to enter the market and wants to develop a tastier and more consistent cola than
what’s currently on the market. In this case, Design for Six Sigma is the most appropriate choice because the product and
the business do not yet exist. Any additional cost and problems associated with redesigning, retooling, and reallocating
resources are not a limiting factor at this point because it’s a new company.
•A company making bulletproof surveillance cameras began using the Six Sigma methodology three years ago and has
yielded great returns from those projects. The projects resulted in a quieter and smoother rotating device. They also saw
increased pixel quality and a more resilient armor coating to help protect the device. Recently the projects have been
yielding smaller and smaller gains, and that rate of improvement has started flattening out. In this case, Design for Six
Sigma is also more appropriate as the rate of improvement has slowed and almost come to a stop. Design for Six Sigma is
useful when we or the company feel that the process and its outputs are still not at the desired level of quality. So we may
need to go back and redesign the process or product from the ground up.

Similarities between Six Sigma and DFSS


•Both Six Sigma and Design for Six Sigma are quality improvement strategies.
•Both focus on achieving a performance standard of 3.4 defects per million opportunities (a conversion sketch follows this list).
•Both the strategies have their focus on quality and customer requirements.
•Both the strategies require the use of statistical and quality tools.
•They are both very systematic processes that focus on the use of data.
•Both are very data intensive and systematic processes that look for improving the highest quality levels at the lowest cost
to the company.
•Both Six Sigma and Design for Six Sigma are implemented by teams, and the teams typically have different belt levels,
including Green Belts, Black Belts, Master Black Belts, and they both involve Champions. The use of the groups and
teams helps in ensuring that the approach is integrated quickly into the culture of the organization.
•Six Sigma and Design for Six Sigma both result in lower costs. These reduced costs come from reducing cycle times,
reducing inspection requirements, and reducing waste and rework.
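As a reference point for the 3.4 defects-per-million figure mentioned in the list above, the sketch below shows the usual conversion between DPMO and a sigma level. It applies the conventional 1.5-sigma shift, which is why 3.4 DPMO is quoted as a 'six sigma' level of performance.

from statistics import NormalDist

def sigma_level(dpmo: float, shift: float = 1.5) -> float:
    # Convert defects per million opportunities to a (shifted) sigma level.
    return NormalDist().inv_cdf(1.0 - dpmo / 1_000_000) + shift

print(round(sigma_level(3.4), 2))      # about 6.0
print(round(sigma_level(6_210), 2))    # about 4.0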

Differences between Six Sigma and DFSS

Focus – Six Sigma focuses on improving existing products and processes by looking for the root causes of errors or
defects, in order to reduce process variation in current products or processes. DFSS starts at the design stage, trying to
achieve an error-free product or service before it is even rolled out and delivered.

Goal – With Six Sigma the primary goal is to improve current processes. With DFSS the goal is to optimize new designs
that don't currently exist.

Purpose – With Six Sigma we are trying to remove defects. With DFSS we are trying to prevent defects before they even
occur.

Process – With Six Sigma the processes are predefined. With DFSS the processes are defined by the organization.

Problem Solving – Six Sigma solves very specific existing problems using an exact methodology, DMAIC. DFSS
identifies problem areas and designs workarounds, using two different methodologies: IDOV and DMADV.

Return – Six Sigma typically provides quicker returns, as the aim is to make improvements to an existing process. DFSS
provides returns that may be more long-term, as it requires designing the new product and going through the entire
product development cycle.

Nature – Six Sigma is very reactive, as it improves upon current existing problems and approaches. DFSS is very
proactive, as it tries to fix problems and prevent them from happening before they even occur.

Illustration -1 
To further illustrate these key differences, consider a hospital scenario. A team in a hospital has decided to initiate
a Six Sigma project to help improve its current drug ordering
process. The current process has frequent shortages and sometimes excessive inventory. Using the DMAIC
methodology, the team has worked through the process to find flaws and correct them where possible.

The Six Sigma project took nearly five months, and the team expects annual savings of $150,000 by removing
these flaws and the wastes within the process. This example shows that the goal of Six Sigma is to
really improve a current process in reaction to a perceived problem, with an extremely high project-focus.

Illustration -2 
Let us suppose that BLP hospital wants to initiate a new surgical procedure for patients who suffer from trauma.
In which case, the hospital uses DFSS and the IDOV methodology to create a new process and service. The
deadline for the design of the new service has been set at nine months. Once this procedure is approved, BLP
hospital will be one of the few in the country to provide this service. This is a long-term
realization of benefits. In initiating the new surgical procedure, the focus is more on ensuring that the surgical
procedure is error-free, therefore it becomes very proactive as an approach because the hospital is not
responding to errors in the existing procedure, and also it cannot afford to make mistakes when people’s lives are
at risk.

It has been said that Design for Six Sigma (DFSS) is considered a more flexible approach in the Six Sigma
methodology. In Design for Six Sigma there are primarily three different methodologies that are employed.

•IDOV: It stands for Identify, Design, Optimize, and Verify.


•DMADV: It stands for Define, Measure, Analyze, Design, and Verify.
•DMADOV: It stands for Define, Measure, Analyze, Design, Optimize, and Verify.

All of these DFSS methodologies have their foundation in the core of new product development, and they encompass the
same overall activities: identify, design, and verify. The differences lie in how the phases are broken down and executed,
which provides flexibility across these methodologies.

IDOV Methodology
The IDOV methodology stands for Identify, Design, Optimize, and Verify.


Identify Phase: The Identify phase, deals with incorporating the customer requirements into the formal product design by
setting up a cross-functional team, assigning responsibilities to the appropriate people, gathering the voice of the
customer, and then defining what the customer requirements are. Once all the requirements have been identified, we must
conduct a competitive analysis and define the critical-to-quality attributes. Some of the tools
associated with the Identify phase are the team charter, quality function deployment, failure modes and effects analysis,
and benchmarking.

Design Phase: In the Design phase the concepts and designs are developed. Once the designs are developed, the
alternatives are evaluated so as to select the concept that best meets the customers' requirements. After that, based on the
selected concept design, the raw materials, process, or service scope is initiated, and the team focuses on developing the
procurement and development plans. The primary tools associated with the Design phase are Design of Experiments and
various statistical tools.

Optimize Phase: The main focus of the Optimize phase is to establish the product’s specifications, service parameters,
and process settings. In this phase, the process capability information is calculated so that the team can project
performance of the current systems and then optimize the designs as much as possible. With the help of the data, the team
can also begin building error proofing into the manufacturing or service processes to reduce variation and costs. In the
Optimize phase, statistical tools and capability analysis are commonly used.

Verify Phase: In the last Verify phase, the team starts to focus on conducting tests, and validating the design and the
quality control systems. The most critical aspect is to verify and validate that the team is ready to move into full
production. In the Verify phase, typically product designs are developed to help validate the systems.

ILLUSTRATION
In order to understand how the IDOV methodology of DFSS works we use an example of Radiomarch Pvt.
Ltd. which is a radio equipment company that’s developing a new MP3 player using Design for Six Sigma.
•In the Identify phase, the company organizes a cross-functional team and they assign design responsibilities to the
suitable team members. Some of the responsibilities include – market research and researching the technical implications
of the customer requirements.
•The team then uses this information in the Design phase along with the customer requirements and the technical
considerations to develop a set of product design drawings. Also the design alternatives are evaluated and compared
against each other on the basis of cost, efficiency, schedules, milestones, and available resources. After gathering all the
information, the team selects the best product design drawing that meets the customer requirements.
•In the Optimize phase, the team takes that chosen product design drawing and finalizes product specifications. The
specifications and product design drawings are given to manufacturing, which is responsible for machinery and equipment
setup and calibration, as well as implementing any error proofing that is necessary in the production system.
•In the Verify phase, once the production system is set up by manufacturing, then the team conducts a series of tests using
prototypes. Any defects occurring in the manufacturing processes are identified and corrected. In this phase all the errors
are fixed and the process is validated to ensure all of the corrections have been fully implemented.

DMADV Methodology
The DMADV methodology of Design for Six Sigma stands for Define, Measure, Analyze, Design, and Verify. It is very
similar to the IDOV methodology, except that the DMADV process is divided into five steps; the first three steps of
DMADV (Define, Measure, and Analyze) are roughly equivalent to the Identify phase of the IDOV methodology. We
shall now consider each of the steps of the DMADV methodology in a little more detail.


Define Phase: The Define phase of the DMADV methodology is very much similar to the Identify phase of IDOV. The
main aim in this phase is to establish a team, determine the requirements from the customer, and then gather the internal
metrics needed to make sure those customer requirements are being met. The Project Charter is created and the goals are
set.

Measure Phase: In the Measure phase, the aim of the team is to assess the current customer needs based on the
information gathered in the Define phase. After that the team sets the specifications for the product, service, or process
and begins to formulate multiple design concepts that address the customer quality needs and any other production or
quality needs.

Analyze Phase: The Analyze phase involves analyzing the activities in terms of how they add value. The team needs to
ensure that the parameters and scope of the design are reviewed against the customer requirements that were identified, so
that the design meets those requirements. It is also suggested to perform benchmarking analysis and look for potential best
practices.

Design Phase: In the Design phase, the team starts to develop a design, and it needs to make sure at this point that they
are balancing quality, cost, and time. Each design is then tested against the required quality measures to ensure that it
meets the customer requirements; if it does not, it is refined as necessary.

Verify Phase: In the final Verify phase, the team begins to test the prototypes against the customer requirements. So the
focus remains on ensuring that the product, service, or process being designed meets those identified customer
requirements. In this phase the team ensures that the controls are in place for each of the processes that are involved.

Similarities between IDOV and DMADV methodologies 


•Both methodologies involve cross-functional teams in the concurrent design, and development of new products, and
services. These teams involve manufacturing, quality, design, sales, and marketing to make sure that we’re getting a good
understanding of what the customer really wants.
•Both IDOV and DMADV methodologies put a key focus on truly understanding the customers’ needs, and specifications
prior to even starting the project.
•The primary steps in both methodologies are to gather the voice of the customer, and really define what those customers’
expectations are.
•Both methodologies rely on early detection and correction of errors.
•Both methodologies have a goal of reaching Six Sigma, so they both focus on designing the products and services right
the first time in order to reduce variation and waste within the processes.
•Both methodologies focus on closing the loop of improving the end product, service, or process during the design phase.
•Both methodologies share the key idea of reducing waste and variation within the process, closing the loop at each step
by taking the customer requirements into account at every step.

Differentiating IDOV and DMADV

Type – IDOV is a modified methodology that was developed by General Electric. DMADV is a traditional Design for Six
Sigma methodology.

Phases – IDOV involves Identify, Design, Optimize, and Verify. DMADV involves Define, Measure, Analyze, Design,
and Verify.

Application – IDOV is typically more applicable for designing a completely new product, service, or process. DMADV is
often applied to redesigning an existing product or process to make sure it meets the Six Sigma level.

Measure – IDOV considers cost-efficiency evaluation as well as some other measures to gauge success. DMADV
considers the quality of the product to measure success.

 DMAIC vs. IDOV and DMADV

Focus – DMAIC focuses on improving current products or processes. IDOV and DMADV focus on redesigning a current
process or product, or on creating new products or processes.

Capacity – With DMAIC, the current processes are viewed as capable of satisfying the customers' needs, so the aim is to
improve the current process using statistical tools. With IDOV and DMADV, the current process is not capable of
meeting the quality needs and customer requirements, so the current processes or products must be redesigned to ensure
they meet customer requirements.

Design – With DMAIC, it is assumed that the current design is satisfactory and capable of meeting the customer needs, so
the design itself does not have to change. With IDOV and DMADV, the product or process needs to be redesigned based
on various drivers such as cost, producibility, or quality.

Flexibility – With DMAIC, it is assumed that the current processes are flexible enough to meet the customer requirements,
so only some improvements within the current process are needed. With IDOV and DMADV, a major change to the
current process is considered necessary, taking into account changes in potential customer demands and new needs.

Verification – DMAIC aims at verifying that the process changes are sustained and that they meet the customer
requirements. IDOV and DMADV involve a more advanced verification, ensuring that the processes are designed to meet
customer expectations.
 
Failure modes and effects analysis (FMEA) is defined as a systematic and proactive approach for identifying and reducing
risk within the products and processes. The FMEA approach is primarily used in Six Sigma and also in Design for Six
Sigma. FMEA is used to identify and really understand the potential problems or failure modes that could occur in the
processes, products, or services. It also helps us to understand what could go wrong in the process and helps to
identify the causes and their effects on the organization and, most importantly, on the customers.
Once the causes and effects are identified, as a team, we can then assess the risks that are associated with those failure
modes, their causes and their effects, and then prioritize them based on their overall impact. FMEA also helps to really
identify the appropriate corrective action so that we can address the most serious problems in terms of their impact on the
customer or the organization, and prevent them from happening again using tools such as poka-yoke or mistake-proofing.
FMEA is considered as a very useful tool during the beginning of a project as it helps the team to understand the scope and
feasibility of that opportunity within the product or processes, and the types of failures that could go wrong. This helps a
team to really narrow the focus down to that specific type of problem that we are trying to improve.

Differences between DFSS and Six Sigma using FMEA


FMEA is used in both DFSS and Six Sigma to identify potential risks and then take the necessary corrective action. FMEA,
however, is used slightly differently in each methodology.

 
 
DFSS | Six Sigma

Design for Six Sigma uses FMEAs to evaluate new processes or products before we move forward with the product development process. | FMEA is used in Six Sigma in the Analyze phase to help analyze the potential defects.
FMEA is used to understand why products or services might fail so that we can understand the potential risk with the new product or process. | FMEA is used in the Improve phase to fix issues before they occur in a process, product, or service.
FMEA helps us understand the effects that the failures have on the customers or our own organization, and we can prioritize those to take corrective actions. | FMEA is primarily used to identify potential errors in the process and correct the potential defects for process improvement.
With Design for Six Sigma, the FMEA is more of a proactive approach. | With Six Sigma, since the process already exists and we're trying to improve it, it's more of a reactive approach.

Benefits of FMEAs
• FMEA can help the team rank the effects of failures by identifying the most important failures, and then the required
actions for that potential failure to help improve the product and process.
• FMEAs help in identifying and ranking customer requirements that are introducing errors, so that those requirements
can be analyzed for appropriate trade-offs to make sure that the improvements make sense while still meeting the customer
requirements.
• FMEAs also document the product, and process knowledge that’s gained through implementing the FMEA. This
documentation includes information such as the potential risk, and this helps with future products and processes, and also
with design and the testing to make sure that the failures are eliminated or caught.
• FMEA focuses on prevention i.e., rather than being a reactive tool, it’s a very proactive tool when designing new products
and processes. The aim here is to ensure that the Six Sigma teams take action before errors happen.
• FMEAs help to reduce development costs by identifying potential failure modes early in the product development
process; caught late, these failure modes lead to extra costs from scrap, rework, late changes, and repairs.
• FMEAs also assist in improving the quality and reliability of the product or process by removing or reducing the effects
that are caused by these potential failure modes from the process of the product.
• FMEA helps to ensure customer safety by preventing safety defects and failures.
• FMEAs are developed by a cross-functional team that helps to facilitate teamwork and communication between different
departments and levels within the organization.

FMEA Types
There are primarily four types of Failure Modes and Effects Analysis or FMEAs

Design FMEA: The Design FMEA is primarily used when designing a new product, or redesigning a product. Design
FMEA is considered to be a very proactive approach for identifying potential design weaknesses prior to launching a new
product. Design FMEAs are used to analyze the component design as they take a systems approach to looking at the
integration of all of the components, the assemblies, and how they fit into the entire design of the product or service. The
primary focus of Design FMEAs is to identify those potential component failures, as failures within the components can
lead to a significant impact on the entire system, and potentially a system failure. It therefore becomes crucial to apply
Design FMEAs, once the potential design has been identified, and before we release any design drawings.

Process FMEA: The Process FMEA is developed for the manufacture, or delivery of a product, or service. Process
FMEAs are primarily used to identify weaknesses or potential errors and risks within the production processes. These are
usually conducted during the production process, and cross-functional teams walk through each step of the production
process to understand what happens if something goes wrong in each step, and the effect on the next step of the process,
which would be an internal customer, and then also on the final customer who's actually using the product. Process FMEAs are
very useful in manufacturing, for instance, to analyze the impact of a failure at each step of the production process.

System FMEA: System FMEAs are beneficial in analyzing system functionality in the initial stages of design. The
Systems FMEAs are incorporated before a specific hardware is determined. The System FMEA helps in identifying
potential failure modes that are associated with the functionality of the system, and which may be caused by the system
design. System FMEAs are critical to use particularly when we have very complex systems as they take into account the
components, and the sub-systems and sub-assemblies, and how a change in one sub-assembly may impact the entire
system and the functionality.

Service FMEA: Service FMEAs are mainly used to analyze services. Service FMEAs are used to spot potential failure
modes that are caused by system or process problems. A Service FMEA should be performed before the first service
reaches the customer so that we can identify any potential errors that might have a negative impact on the relationship
with the customer. Service FMEAs are usually performed on non-manufacturing aspects of the business such as financial
or legal services, education, health care, or hospitality.
Steps in performing FMEA Process
There are mainly 10 steps in developing an FMEA process.
1. The first step involves reviewing the process, or the design of the product or service, that we are trying to improve, and
identifying all of the potential failure modes. This is generally done by walking through the process, or examining each design
aspect. For instance, if we consider the case of a fire extinguisher, it may have problems with locking pins, discharge hoses,
or pressure gauges. These are various potential failure modes that could impact the success or failure of the process. If we
think about a service aspect, this could be data entry or incorrect coding that could lead to errors.
2. In the second step of the process we aim to identify the potential effect each failure mode could have on the customer.
Primarily, this is the information we are going to use in a later stage of the process to assign ratings. If we again take
the case of the fire extinguisher, a failure with a pressure gauge could cause an explosion when the customer uses the
extinguisher, or a discharge hose failure could cause the extinguisher to spray erratically. And if we think about the service
side using the incorrect coding example, this might make this software susceptible to bugs.
3. The third step of the FMEA process begins with assigning a severity rating to each effect. The severity rating is based on
a relative scale from one to ten, assigned based on the knowledge and expertise of the team members.
A maximum rating of ten means that the failure has a dangerously high severity in terms of the impact on the customer,
whereas a rating of one means that the severity is extremely low, and the customer may not even notice the impact. For
example, let's take the case of a pressure gauge failure where the team assigns a severity rating of ten, as it could lead
to a serious injury or death. On the service side, a coding error could lead to a software bug, which could
be rated as a nine on the severity scale because of the impact on the customer.
4. The fourth step in the FMEA process is to identify potential causes for each of the failure modes that were previously listed.
For instance, if we again take the example of a locking pin in the fire extinguisher, a failure could result from a
malfunctioning of machinery. On the service side, the coding error could creep in due to a limitation of the software or the
inbuilt debugger.
5. The fifth step in the process of developing FMEA process involves assigning each failure mode an occurrence rating. In
which case the team estimates how likely, or how frequently failures could occur in the product or in the process. The ratings
are again set on a ten-point scale. One of the best ways to determine this is based on actual failure data of past products or by
using previous experience from a similar product. For instance, if we again take the pressure gauge issue on the fire
extinguisher, the team could determine the occurrence rating as a four.
6. The sixth step in the process of development involves determining the existing process controls for each of the failure modes
that were identified. The controls, tests, and procedures should be identified to understand whether they have reduced the
likelihood of a defective product reaching the customer. For instance, in the case of the fire extinguisher, the
pressure gauge is checked by a quality assurance team before the product is shipped to wholesalers. For the
coding error, both the inbuilt debugger and a separate debugging step in the testing process provide controls for the system.
7. The seventh step involves determining the probability that each control will be able to detect and prevent the failure mode, or
its cause. Again, we use a ten-point rating scale to assign a detection rating where ten means that it’s extremely unlikely to
detect the potential failure mode, and one means that we are extremely likely to detect potential failure mode. The team may
determine that the detection rating for the pressure gauge is a three, which means that the problem is quite likely to be
detected. And for the coding error, the team assigns a rating of four.
8. The eighth step involves calculating the risk priority number (RPN) for each failure mode in the failure modes and effects
analysis (see the short sketch following this list). RPN is calculated as,

                          RPN = Severity Rating x Occurrence Rating x Detection Rating


The RPN is calculated for each item. Since each of the three factors in the formula above is rated on a scale from one to
ten, the lowest possible RPN value is 1 and the highest is 1,000. Let's take the example of the pressure gauge again, in which
case the team calculates the RPN value as 120 by multiplying the severity rating of ten by the occurrence rating of four and
the detection rating of three.
9. After all of the RPN numbers are calculated, the team can prioritize them to identify what recommended actions should be
taken. This assists in eliminating or reducing any high-risk failure modes in order to reduce the risk to the customers and to
the organization itself.
10. Once the recommended actions have been implemented, the final step involves calculating the new RPN numbers. Once
changes are made to the process or the product, we need to go back and update the severity, occurrence, or detection ratings
as appropriate, and then recalculate the RPN. The team can then continue to monitor and maintain the results, and use the
FMEA as a continuous improvement tool by continuing to pursue and implement corrective actions based on the highest RPN
numbers.
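
As a minimal illustration of steps 8 and 9, the Python sketch below computes RPN values and sorts failure modes by risk. It is only an example, not part of the FMEA method itself; the occurrence rating for the coding error is an assumed value, since the text above only gives its severity and detection ratings.

def rpn(severity: int, occurrence: int, detection: int) -> int:
    # Risk Priority Number: each rating is 1-10, so RPN ranges from 1 to 1,000
    return severity * occurrence * detection

# Hypothetical failure modes based on the fire-extinguisher and coding examples above
failure_modes = {
    "Pressure gauge failure": rpn(severity=10, occurrence=4, detection=3),  # 120
    "Coding error":           rpn(severity=9,  occurrence=5, detection=4),  # 180 (occurrence assumed)
}

# Step 9: prioritize by RPN, highest risk first
for name, value in sorted(failure_modes.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: RPN = {value}")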

Failure Modes and Effects Analysis Worksheet

The Failure Modes and Effects Analysis worksheet captures all of the potential risks and failures. The top part of
the FMEA, the header, captures information about the product and processes under consideration. This is helpful
for document control, as we then know when the FMEA was prepared, who prepared it, and when revisions occurred.
The main part of the FMEA worksheet captures all of the information about each step and feature of the
process, and the potential failure modes. For example, in a satellite receiver FMEA, the step is the receiver, the failure
mode is a dropped signal, and the effect of a dropped signal could be a loss of service. This helps the team provide a
very systematic approach to identifying failure modes, their causes, understanding how the process is being
controlled, and then what potential risks we have. It helps the team to identify and prioritize those risks so
that we can take appropriate action.

Ratings of Severity, Occurrence, and Detection


Risk Priority Number (RPN) is a very useful aspect of FMEA as it is very helpful for process improvement teams to
prioritize potential effects and focus the improvement efforts. There are three main components of
the risk priority number – Severity, Occurrence, and Detection.

•Severity ranks the seriousness of the effect of a potential failure.


•Occurrence ranks the probability that a particular failure will actually happen.
•Detection ranks the likelihood that a failure will be detected before it reaches the customer.
Each of these is typically on a scale of one to ten, although some companies have modified these to put them on
a scale of one to five. A lower number in each of these categories means a lower risk; the higher the value, the
higher the risk.

In order to calculate the risk priority number for each failure mode, we must be able to rate the components that
go into the RPN – severity, occurrence, and detection. Each of these components uses the rating scale to help
prioritize the failure modes. The rating scale should include mainly three things – a range of ratings, again
typically from one to ten, a description of what each of those ratings means, and the criteria for each rating.
There are some generally accepted scales; however, most organizations use their past experience and judgment to
develop their own rating scales that are appropriate for their own products and services. The key is to
make sure that the ratings are clearly defined and understood to allow for an objective prioritization using
whatever rating scales we develop for the organization.

Severity Rating
The severity in an FMEA is an indicator of the seriousness of the problem. If we use a rating scale of one to ten,

•A rating of one or two would have very low or little effect on the actual customer. In terms of the criteria or definition,
the failure would probably not be noticeable to the customer, and would not significantly impact the product or the
process.
•A mid-range rating of five or six has a moderate or significant impact on the customer. The failure would result in a
partial malfunction of the product, so the customer would not get all of the intended value from the product.
•A rating of nine or ten has a very high or catastrophic impact on the customer; this could be a potential safety issue, or
could result in an injury or death.

Illustration: Let's take the example of a Green Belt working in an assembly department of a toy manufacturer, with
an assignment to improve the painting process. The severity of a painting failure would have only a slight effect on
performance, but for a rather discerning customer, these non-vital faults could be noticeable. This
would result in the customer being slightly annoyed, so the team might give this a severity ranking of six.

Occurrence Rating
Occurrence rating is defined as an indicator of the frequency of potential failures. Consider a generic
occurrence rating scale from one to ten. A lower rating of three or four means that the
chance of occurrence is low, or relatively few failures would occur; in terms of the potential failure rate, it
means that we would have one occurrence every one to three years. A higher rating such as a seven or
an eight means that the chance of failure is high, such that there are repeated failures (with roughly one occurrence per
week). If we look back at our toy painting example, the probability of occurrence is fairly low. They have
occasional failures, about one every three years. Therefore, the occurrence rating would be a three.

Detection Rating
The final component of the risk priority number is the detection rating. It is an indicator of how likely we are to detect
an error before it reaches the customer.

It is equally crucial to understand while determining the detection rating that our customer might be the next step
in the process, or the final customer who is actually using the product. We consider a generic rating scale again
using ratings from one to ten. Low rating of one to two means that it is almost certain that we are going to catch
the defect before it reaches the customer i.e., the defect is completely obvious, and it will be caught. On the other
hand, a rating of seven or eight means that our chances are pretty low for detecting the defect before it reaches
the customer.

For example, a detection control might be that the product is manually inspected during the process. Now if we look back
at the example of the toy manufacturer, the failure mode of scratches in the paint had a very effective process
control in place. Therefore the probability of detection or prevention was pretty good, and the team
determined the detection rating should be set at four.
RPN Calculation
Once the ratings for severity, occurrence, and detection have been determined, we can calculate the risk
priority number. As noted earlier, the risk priority number is calculated by multiplying the severity by the
occurrence and by the detection.

RPN = Severity x Occurrence x Detection


For instance, if the severity is seven and occurrence of three, and a detection of nine, then the RPN number
becomes 7 x 3 x 9, and therefore, the RPN number is 189.

Before we can say what this value tells us about the process and what a value of 189 means, we have to
know enough about the other steps within our process to see what the other RPN values are. Note that RPN
ratings are relative to each other; they are only meaningful once we have compared them against the other RPN
numbers in the failure modes and effects analysis. The RPN really provides its value by helping the
team prioritize where corrective actions need to be taken, and once we have made those corrective actions, we
need to go back and recalculate the risk priority number.


Illustration: Let’s take the example of baggage handling at an XYZ Airline process where the team has identified
two specific problems but not sure which problem should be addressed first.
•In the first problem the automated system that codes the baggage with its destination does not always record the proper
codes.
•With the second problem, the tags that are attached to the baggage and are used for identifying the destination sometimes
are not secured, and they are lost.

Now, when we look at the first problem we start by calculating the risk priority number using a scale of one to ten.
The team then finds that the coding problem has a severity of ten as it is imperative that the luggage arrives at a
proper destination for the customer. The team then finds that two out of every one hundred pieces of luggage
are coded incorrectly. Consequently, the team assigns an occurrence rating of three, because it is not very likely to
occur. Also in almost every case, the problem is detected before the customer’s luggage is actually lost.
Therefore, the team assigns the coding problem a detection of three.

Now we look at the second problem with the baggage tags being lost, in this case the team assigns the severity
rating of ten, because without the tags, the baggage handlers have no way of knowing where the luggage is
going. In this case, the team decides that the occurrence rating should be a value of six because the baggage
tags fall off fairly frequently and are lost quite often as well. Then the team determines that the detection rating
should be a value of three because most of the time the baggage handlers realize that the piece of luggage is
missing, and it’s tagged before it’s actually loaded. In this case the Risk Priority Numbers for problem one is given
by,

For Problem 1,                                                 RPN = 10 x 3 x 3 = 90

For Problem 2,                                                 RPN = 10 x 6 x 3 = 180

Now, when the team determines prioritization they decide to focus on problem two because the RPN number is
much higher. In general, the higher the RPN number, the higher the risk.

Constraint of RPN
•We should only compare the ratings from the same analysis to make sure we get a comparative analysis.
•If we have two or more problems that have the same or similar RPN numbers, then we should prioritize and focus on the
one with the highest severity number, as this has the biggest impact on the customer and also represents the
highest risk in terms of what could go wrong and what the customer sees.
•We must take into account organizational priorities as it will help us to determine the risk priorities.

Illustration
For instance, if the organization is very risk averse, then we might have a general rule that any RPN value over a
certain threshold must have corrective action actively being worked on to reduce the RPN number. Once the
corrective action has been implemented, then we can use the original and revised RPN numbers to calculate the
percentage reduction. This helps the team to evaluate the effectiveness of the corrective action. The percentage
reduction is determined by subtracting the revised RPN number, which is noted as RPN sub-r, from the initial RPN
value, which is noted as RPN sub-i, and dividing the result by the initial RPN number. Now we consider the
baggage handling example again, let’s say that the suggested actions resulted in a revised RPN number of 60 as
compared to the initial value of 180. So, when we calculate our percentage reduction, we start by subtracting 60
from 180 and then dividing by 180. This result in a percentage reduction of 67%, which indicates that the airliner
was able to reduce the risk associated with the tagging process by 67% by implementing the recommended
corrective actions.

% Reduction in RPN = ((RPN_i – RPN_r) / RPN_i) x 100
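
The calculation below is a minimal Python sketch of this percentage-reduction formula, reusing the baggage-handling figures from the example above; the function name is only illustrative.

def rpn_percent_reduction(initial_rpn: float, revised_rpn: float) -> float:
    # (RPN_i - RPN_r) / RPN_i, expressed as a percentage
    return (initial_rpn - revised_rpn) / initial_rpn * 100

print(round(rpn_percent_reduction(180, 60)))  # -> 67, i.e. a 67% reduction in risk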

DFMEA and PFMEA


The primary objective of Failure modes and effects analysis is to identify risk within the products and processes,
and then prioritize the potential risks to make sure that the customer’s requirements are met.

There are various points of similarities and differences when we talk about a Design FMEA and a Process FMEA.
So let’s consider the differences first.

Dissimilarities between Design FMEA and Process FMEA

Design FMEA | Process FMEA

Design FMEAs are useful in uncovering problems with a product or a service design. | Process FMEAs are useful in finding problems in currently operating production and operation processes.
Design FMEAs are typically done before production begins, so that we are being proactive by identifying and reducing risk before we move from the design to production. | Process FMEAs are typically done before production starts, when we're setting up a production process, and they are also used during normal operations of the process as a continuous improvement effort.

Similarities between Design FMEA and Process FMEA


•Both Design FMEAs and Process FMEAs use severity, occurrence, and detection rankings.
•Both Design FMEAs and Process FMEAs look at why a product might fail or might not meet the customer’s
requirements and expectations.
•Both Design FMEAs and Process FMEAs are an effort to continuously improve the products and services for the
customers.
•Both Design FMEAs and Process FMEAs are methods to identify problems such as safety hazards, product
malfunctions, or shortened product lives.

Illustration of the uses of Design FMEA and Process FMEA

•Design FMEAs could be used on things such as an air bag in a car that might not be working properly, or a defective
temperature control mechanism.
•Process FMEAs could be used in a chemical manufacturing process, or to investigate potential causes of an inadequate
mixing time.
•In Design FMEA we assume that the process is going to operate as intended, and therefore the focus is on product design
itself.
•Process FMEA assumes that the product design meets the intent, and therefore the focus is on the process itself, and it
doesn’t take into account the design aspects.
•Design FMEA considers the technical and physical limits of the production process; it doesn’t use the process
characteristics to overcome the design weaknesses. The focus is really on improving the design itself.
•The focus of the Process FMEA is on the process. It's important to understand that when we perform either type of
FMEA, we may want to include some overlapping elements.

For instance, if a team is performing a Design FMEA, it does not need to include process-related failure modes and
causes if these are covered by the Process FMEA. As a team, we need to decide what is appropriate.
We might decide to consider some of the process elements in the Design FMEA. And likewise, when a team
starts working on a Process FMEA, they don’t necessarily need to include failure modes, and causes and
mechanisms from the design, but the team might choose to include some of the Design FMEA elements. One of
the reasons for this is that when we start creating the Process FMEA, typically it originates from the process flow
chart and because of this it might include some potential effects from the Design FMEA when it’s appropriate.

Design FMEA and Process FMEA Worksheets


The process of conducting a Design FMEA or a Process FMEA requires that the team use a worksheet
to capture this information.

•When we start conducting a Design FMEA, the first column includes the features or functions of the product or service.
But while we are conducting a Process FMEA, each of the process steps is listed in the first column. The failure mode
identifies how the product might fail under specified operating conditions, and the effect is what the customer
experiences as a result of the failure mode.
•The next column is the cause; this is the potential cause of the failure, and it is what the eventual improvement
solution should focus on. It is crucial not to confuse failure modes with causes, which is why separate columns are
used to record them. A failure mode is any way in which the design or
process may fail to meet its intended function or requirements.
•The final columns of the FMEA worksheet include the controls, ratings, RPN, and response columns. The Control
column is where we identify existing measures that are in place to detect our failure modes. While performing a Design
FMEA these are the design activities and tests that are used to prevent or detect failure modes. And with the Process
FMEA, these are tools such as statistical process control and other methods to prevent or detect failure modes.
•In the Ratings column we record the team’s assessment of the severity, occurrence, and detection of each failure mode.
Such that the RPN number represents the product of the severity, occurrence, and detection rating for each failure mode.
•The Responses column is where we record the actions that we will take or have taken to improve the design or the
process. In a Design FMEA these are the actions taken to improve the design, and this could be to detect,
reduce, or lower the severity of a failure mode. In a Process FMEA, these are the actions that focus on
preventing the failure mode from occurring. Nonetheless, reducing the severity of a process failure
mode might require design improvements, or process revisions, or both. (A rough data-structure sketch of such a
worksheet row follows below.)
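
The Python sketch below is one possible way, not an official worksheet format, to represent a single FMEA worksheet row in code, mirroring the columns described above; the example row reuses the satellite-receiver figures from the illustration that follows.

from dataclasses import dataclass, field

@dataclass
class FmeaRow:
    step_or_function: str   # process step (Process FMEA) or product feature/function (Design FMEA)
    failure_mode: str       # how the item might fail to meet its intended function
    effect: str             # what the customer experiences if the failure occurs
    cause: str              # potential cause the improvement solution should target
    controls: str           # existing measures that prevent or detect the failure mode
    severity: int           # rating, 1-10
    occurrence: int         # rating, 1-10
    detection: int          # rating, 1-10
    responses: list = field(default_factory=list)  # actions taken or planned

    @property
    def rpn(self) -> int:
        # Risk Priority Number = severity x occurrence x detection
        return self.severity * self.occurrence * self.detection

row = FmeaRow("Receiver", "Dropped signal", "Loss of service",
              "Transmitter failure", "Random inspection",
              severity=8, occurrence=3, detection=5)
print(row.rpn)  # -> 120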

Illustration of Design FMEA


Let us take the example of a Design FMEA that was performed for a satellite receiver. Before starting to perform
a Design FMEA, it is crucial to understand the customers’ requirements and the intent of the engineer’s design.
This helps to outline expectations for the design itself. The top portion of the worksheet gives us information on
the item itself, such as who is responsible for preparing the FMEA worksheet, the team members, and the revision date.
For the Design FMEA, we must understand and analyze how the receiver functions so that we can identify any
design weaknesses that could be improved. The failure mode helps to identify what could go wrong. In this case,
the first potential failure mode identified by the team is a dropped signal, and the second failure
mode is weak signal reception. We then move on to the effect. The effect of a dropped signal is a loss of
service, and the effect of a weak signal is diminished quality of service.
Now continuing with the Design FMEA, we can now add causes, controls, and ratings to the FMEA worksheet. So
when we consider the failure mode of dropped signal, transmitter failure is determined to be one of the potential
causes. Then atmospheric interference is determined to be one of the causes for the weak signal reception.
Since atmospheric interference cannot be controlled, none is entered as a control. When the team looks at
transmitter failure, they determine that a random inspection is being performed. Using this information, the team
can next determine the severity, occurrence, and detection rating for each of these failure modes. As we see
while calculating RPN,

RPN number for the dropped signal = 8 x 3 x 5 = 120

RPN for a weak signal reception = 8 x 7 x 10 = 560.

Based on these RPN numbers, the team determines that they should take action on both. As we see the highest
priority should be focused on the weak signal reception as the risk priority number is significantly higher than the
RPN number for the dropped signal. Then the team verifies that they should improve resistance for the weak
signal reception to help improve the risk priority number. Then the team also decides that they should increase
inspection for the dropped signal. In which case, the team might determine to take action on both of these,
because of the high severity number on both.

Illustration of Process FMEA


Let us now consider an example of a Process FMEA. We have another team working on improving their travel
booking services that decides to use a Process FMEA to understand how to better improve their service. The
team starts with analyzing step three of the booking process, which requires making a reservation. The team
started by identifying two potential failure modes. These include the reservation not being recorded and the
reservation being made for the wrong room type. As they go through this, the effects are that the customer has
no room and that the customer does not have the right room. The team also identified the potential causes as a
lost network connection and a poor selection interface. As for the process controls in place, the team
performs a random quality assurance check, while the case of a reservation being made for the wrong room type has
no control in place. The team then goes through and determines the appropriate ratings for the severity,
occurrence, and detection for each of these failure modes. The team calculates the risk priority number of 168 for
the reservation not being recorded, and 192 for the reservation being made for the wrong room type.

As we see, there is not a significant difference between the two RPN numbers, so the team goes back and looks
at their ratings, and determines that they should prioritize and focus on their reservation not being recorded as
the severity is very high, which has a bigger impact on the customer. Now that the team has determined their
priority for improving their reservation process they start by developing corrective actions. In terms of their
reservation not being recorded, one of the issues was just using a random quality assurance check. Therefore, in
order to improve the process the team decided to change the process to include a verification activity. This
inclusion will help with the detection and further reduce the RPN number. Also in terms of making a reservation
for the wrong room type, the causes were determined to be a poor selection interface. Finally, the team decided
that they should change the selection button to a radio button, so as to prevent the error from happening and this
would also reduce the RPN number for this failure mode.

Organizational – Problem-solving Approaches


The chapter focuses on the process of identifying appropriate Six Sigma projects. We will also look at some of the common
approaches that organizations may use to solve problems with reference to quality, cost, and customer discontent caused by
various issues within an organization, such as poor process implementation, products, or services.
Here, the Plan-Do-Check-Act (PDCA) cycle is an iterative process that functions as an underlying basis for most continuous
improvement methodologies. As illustrated earlier, the PDCA cycle consists of the following phases:
• In the Plan Phase, the team starts to understand the objectives of the processes, and the customer's expectations or
targets.
• In the Do Phase, the team starts process execution and data collection which leads into some of the analysis.
• In the Check Phase, the team studies the actual results, and then compares these results against those expected targets.
• In the Act Phase, the focus is on implementing corrective actions, i.e., the primary aim is to make sure the gaps are closed
between what is expected and what is actually achieved.
We can use the PDCA cycle for any of the Six Sigma processes to understand how the goals and objectives for Six Sigma
projects are achieved.

Quality Circles
Quality circles were developed in the 1960s in Japan. A quality circle can be defined as a volunteer group of employees
or workers, typically six to eight people, that meets regularly to focus on identifying problems, gathering and analyzing
data, and then generating solutions. Primarily, quality circles can be described as a means for a team of people to work
together to improve quality.

This leads us into the continuous quality improvement loop, an approach to quality management that is built upon
the traditional quality improvement methodologies. This approach emphasizes the need to structure quality improvement
as a systems approach for improving quality within our organization. The quality circle approach focuses more on the process
itself and it adds Deming's philosophy, which says that there are no bad people, just bad processes. So if we want to
improve processes, then we need to improve the systems within the organization.

ISO 9000
ISO 9000 was developed by the International Organization for Standardization, or ISO. ISO 9000 is a quality management
system which assists an organization to define, establish, and maintain their quality assurance within a manufacturing or
service industry.

Total Quality Management (TQM)


Total quality management (TQM) is defined as a system that was developed in the late 1980s and early 1990s with the
objective of supporting organization-wide efforts to make sure that the organization continuously improves in order to deliver
high-quality products and services. The objective of TQM is to help change the culture of the organization so that the
improvement becomes permanent.

Business process re-engineering


Business process re-engineering is defined as a business management strategy. This strategy was started in the early 1990s
with the focus on analyzing the design of workflows and business processes within the organization to assist the
organizations to reorganize the process workflow so as to improve the process of meeting customers’ needs and expectations.
Lean Methodology
The Lean methodology was derived from the Toyota production system, with the focus on reducing the waste within the
system. The primary aim of the Lean system is to reduce the overall lead time within the process, and eliminate activities that
are non-value-adding. The methodology helps us to deliver only what the customer perceives as value, which in turn helps us
reduce the costs associated with delivering that product or service. Commonly used Lean tools are – Value Stream
Mapping, 5S, Standard Work and Visual Management.

Six Sigma Methodologies


The Six Sigma methodology was developed in the mid-1980s at Motorola with focus on identifying root causes and reducing
variation within the processes. With an objective to reduce the variation within the processes, the team is able to provide a
more consistent product or service to the customer. The Six Sigma methodology is based primarily on the Define,
Measure, Analyze, Improve, and Control, or DMAIC.

Six Sigma Assessing Readiness


After an organization decides to use Lean Six Sigma as a problem-solving approach, it is required to perform a
comprehensive assessment, which includes a readiness assessment to comprehend if the organization is really ready to
implement Lean Six Sigma. The process of readiness assessment involves three main steps
• Assessing the Organization's Outlook: The first step involves assessing the organization's outlook and future path. This
requires an understanding of the critical business processes at this point in time, and whether any change is required to help
improve those critical business processes. For this, the team must look at the bottom line and at cultural and competitive
needs. It requires an understanding of how the firm is meeting the customer's expectations as compared to the competitors.
It is also necessary to check whether the culture of the organization is ready and whether, financially, the organization can
make changes at this point in time and put in the necessary time and investment. When we start down the Lean Six Sigma
path, we need to make sure that we are selecting projects based on where we need to go as an organization.
• Evaluate Current Performance: The next step in the process of readiness assessment is to evaluate the current
performance. This involves getting a baseline of where the organization currently stands. The
key question for this step of the readiness assessment is to understand the current results for output, defects, yield and
variation. This gives an idea of the organization's baseline. It helps us address a second question, which is to
understand how the organization is currently meeting customer requirements. It involves understanding the gap between
the expectations from the customer and the current baseline performance. Also it is required to look at the production
processes to determine how efficient they are in terms of rework, waste and cost per unit.
• Change and Improvement: The last step involved in the process of readiness assessment is to look at the organizational
systems and determine the type of capacity for change or improvement we have within the organization. At this point it is
required to understand the organization’s outlook, its future path and then it is required to have an understanding of what
the current baseline performance is. Using the given information, we can understand how well the system is going to
handle the change, and how effective the current improvement approaches will be to help to reach that change that is
needed within the organization. It is also required to assess the change management structures within the organization.

Interpreting Results
Once the process of readiness assessment is complete, it may be required to go back and interpret the results.
• Try to understand and analyze whether Lean Six Sigma is critical to the business needs, the baseline, and the cultural and
competitive needs. One of the key factors here is having a clear strategic course for the organization. If this is
not in place, it is going to be very difficult for the organization to buy into the Lean Six Sigma approach, as there is not a
burning platform or pressing need right now that is clearly linked to the strategic organizational goals. As a result, it
will be very difficult to build the momentum behind implementing an initiative such as Lean Six Sigma.
• In the second stage of assessment, the motive is to evaluate the current performance of the organization and to
understand the customers' requirements. If we already have strong processes
in place helping us meet the current performance targets, and we don't have a big gap to fill between our current
performance and the customer's expectations, it would be difficult to justify implementing Lean Six Sigma. It
may actually be unfavorable to the other current systems that are in place, so we may be better off staying on course with
our current initiatives.
• The final step of the assessment looks at the organization's capacity for change. This requires assessing whether the
potential gains are really going to justify an investment as big as Lean Six Sigma. If the firm already has systems in place
that are meeting current needs, or if the right resources and culture are not in place, implementation might be very difficult.
It may be overwhelming for employees and resources to implement an additional process improvement methodology,
particularly if they are already meeting the customer's goals.

Six Sigma – Process of Project Selection


Following are the suggested steps that must be undertaken for project selection –
We begin our Six Sigma initiative with project selection as it is an important aspect of deploying Lean Six Sigma that helps
to dictate how well the Six Sigma methodology will be applied. Particularly if we are initiating the Lean Six Sigma
methodology within the organization, then it becomes all the more important to select the most appropriate projects. Ideally,
training projects are the initial projects that companies implement during their Lean Six Sigma training. Therefore it
is important to ensure that the projects are aggressive yet realistic so that they are a successful proposition.
The key steps involved in the process of project selection for Lean Six Sigma initiatives are –
• Project Shortlisting: The first step starts with shortlisting the projects and opportunities. In this step we are required to
understand the current business performance and how it relates to the customer's expectations. We can then build up
improvement opportunities based on the current performance and how well the customer's expectations are met.
• Selection Criteria: The second step involves determining the selection criteria. While there are general guidelines, we must
modify them slightly as per the organization's requirements. It is very important to know the criteria that specifically
relate to the organization, and the expected outcomes of the projects, to help the organization with its Lean Six Sigma
projects.
• Prioritize Project Opportunities: Once we have determined the selection criteria, we can then use those criteria to prioritize
the project opportunities. This will assist us in ranking the list of project opportunities based on the selected criteria, and
then we could move ahead with selecting the most appropriate or best project opportunities for the organization. One of
the ways to prioritize Lean Six Sigma projects is with the help of a priority matrix.
• Using a Priority Matrix: In order to use a priority matrix, the first step is to develop criteria based on
the organization’s requirements. Some of the common criteria for selecting the projects could include – sponsorship and
level of support from management, their projected benefits, resources required, and the scope of the project, the clarity of
the deliverables, time to complete, team membership requirements, and the value of the application.
• Assigning Criterion Weights: Once the criteria have been developed, as an organization we must weigh the importance of
each of these criteria. Note that the weights should add up to one, as each weight is essentially the percentage of
importance given to that criterion.
• Project Scoring: Now the management will look at each of the criteria and score each project opportunity based on those
criteria. A scale of 0 to 10 is typically used to rate each project opportunity and then their weighted score is calculated for
each project opportunity by taking the score for each of those categories, multiplying it by the weight, and then
calculating the total weighted score. The value for the total weighted score will then be compared for each project
opportunity and the highest total weighted score would be the top priority for the first project that we would target for
implementation.

How to use a prioritization matrix?


Suppose an organization uses a prioritization matrix to prioritize three different project opportunities; this involves three steps.
• The organization determines the criteria against which to measure the projects.
• It assigns a weight to each criterion.
• The team then scores each project against each criterion using a scale of 0 to 10.

Weighted score = Criterion weight x Criterion score
Illustration
Let us assume that for Project 1 and Criteria 1, the weight was 0.16 and the score was 6. Therefore the team took 0.16 times
6, which gives a weighted score of 0.96. Once all of the weighted scores were calculated, the team determined the
total weighted scores by adding up all the weighted scores for each specific project opportunity. For Project 1, the total
weighted score was 6.64.
Now let’s assume the total weighted score for Project 2 was 6.81 and for Project 3 was 7.01.
Here we see that, since Project 3 had the highest weighted score, this is the project that was prioritized as the highest.
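
A minimal Python sketch of this weighted-score calculation is shown below. The criteria names, weights, and most of the scores are illustrative assumptions (only the 0.16 weight and the score of 6 for Project 1 come from the example above), not data from the text.

# Weights for each selection criterion; they must sum to 1.0
criteria_weights = {
    "management support": 0.16,
    "projected benefits": 0.30,
    "resources required": 0.24,
    "time to complete":   0.30,
}

# Scores (0-10) given by management for each project against each criterion
project_scores = {
    "Project 1": {"management support": 6, "projected benefits": 7,
                  "resources required": 6, "time to complete": 7},
    "Project 2": {"management support": 8, "projected benefits": 6,
                  "resources required": 7, "time to complete": 7},
}

def total_weighted_score(scores):
    # Sum of (criterion weight x criterion score) over all criteria
    return sum(criteria_weights[c] * s for c, s in scores.items())

# The highest total weighted score identifies the top-priority project
for name, scores in sorted(project_scores.items(),
                           key=lambda kv: total_weighted_score(kv[1]),
                           reverse=True):
    print(name, round(total_weighted_score(scores), 2))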

Selecting Six Sigma Methodology


After the Six Sigma projects have been prioritized the next important thing is to decide which Lean Six Sigma methodology
must be used. There are primarily three different types of Six Sigma projects.
• DMAIC: The Six Sigma methodology is mainly used when we are improving an existing process, product, or service. The
Six Sigma methodology includes five phases known as Define, Measure, Analyze, Improve, and Control, or DMAIC.
These five phases involve defining the problem statement, measuring the current baseline to understand how the current
process operates, analyzing the relationships between the inputs and the outputs of the process to determine the key
variables, improving those key variables, and then controlling the process through monitoring to ensure
that the gains achieved are sustained.
• DFSS: Design for Six Sigma is a methodology used when developing a completely new product from scratch, or when Six
Sigma has improved a process to a point beyond which no further improvement can be reached. This typically
happens when the processes get to about a 4.5 sigma. At this point we must go back and redesign the products to get those
further gains and continue to meet or exceed the customer’s expectations. DFSS projects can take the form of IDOV or
DMADV projects –
• IDOV stands for Identify, Design, Optimize, and Verify, and is typically a little bit more sophisticated in terms of the
testing and validation because it uses a systems engineering approach to Design for Six Sigma projects.
• DMADV stands for Define, Measure, Analyze, Design, and Verify, and is typically used for more of the core new designs.
• Lean Methodology: The Lean methodology primarily uses the framework of kaizen. Kaizen is a Japanese term which
means small incremental improvements. There are mainly four different kinds of kaizen applications – Project kaizen,
Process kaizen, Kaizen blitz, and Gemba kaizen.
• Project Kaizen: When we talk about a project kaizen, it uses Lean methodologies within the projects to drive out the
waste as the process improvements are made or designing improvements with our project.
• Process Kaizen: The process Kaizen, involves specifically applying Lean tools to a process to improve it.
• Kaizen Blitz: Kaizen blitz is done in a small time frame such as a three to five day event where a team of typically six to
eight people work on implementing Lean within a specific area.
• Gemba Kaizen: With gemba kaizen, gemba means going to the actual place. So we're performing a kaizen event in the specific
location, or specific place, where the process improvement needs to occur. Kaizen is best used when there is going to be
a high impact on efficiency. It is suggested to use kaizen events where a low effort is required to
complete the project – kaizen events go after the low-hanging fruit. This also ties into using kaizens where we
have a high probability of success, and in an area where the project is not politically charged. Since kaizen is
a Lean tool that focuses on reducing waste within our process, it's also good to use kaizens in processes where we need to
reduce the inventory or the work in process, also referred to as WIP.

Step in kaizen process


• The first step involves conducting a strategic business analysis to recognize where we currently are as a business; this
involves understanding how we are currently meeting the customer expectations and the type of competition we have.
• Second step involves conducting a value stream analysis. It is very crucial to map the current state of the process to
understand how the process currently operates and whether there are any potential wastes within the process that could be
removed.
• The third step involves analyzing the current state and using that information to identify kaizen event opportunities where
there are potential wastes within the process that should be removed to better meet the customer’s expectations.
• The last step is to plan a pilot kaizen event based on those kaizen opportunities.

There are various desired outcomes when performing benchmarking on a Six Sigma project. The features
of Six Sigma benchmarking are –

•Benchmarking helps to understand what is achievable by figuring out who is best in class and in what area.
•It enables the Six Sigma team to determine how they should focus on their improvement by seeing what is best in class.
•Benchmarking also helps to develop realistic targets and determine ways on how we should go about achieving these
targets.
•Benchmarking helps the Six Sigma team to determine how they should adopt the best practices.

Understanding who is best in class and identifying the characteristics that the organization is
currently not achieving, using a tool such as gap analysis, can help considerably with determining how to achieve each of
these benchmarking outcomes.

The process of Six Sigma benchmarking involves three key layers –



•Measure current performance – In the first layer, as an organization and as a Six Sigma team, we need to understand
the current baseline performance. This assists us in analyzing whether we need to make improvements based on
who is considered best in class.

•Determine the cause of performance – This means understanding why the current baseline performance is
where it is, and why the best-in-class performance is at a higher level.
•Emulate the work practices – Emulate the work practices of whoever is best in class in that area to help improve the
current performance and elevate it closer to best in class.

Firstly, it is important to understand what benchmarking is. Benchmarking is primarily a tool for understanding
best in class. This information is used to perform a gap analysis to understand where the business
is falling short of the best-in-class standard. Benchmarking can be considered a learning process to understand
how to improve the process. Benchmarking is a partnership, as it involves working to improve the process by using
another organization, or another department within our own organization, to benchmark against. By developing a
partnership with that other department or organization, both parties can learn from each other.

Benchmarking can also be used as a warning system, as it helps to identify where we are falling short compared with
the competition. We can then use this information as a learning experience to better understand how the
organization operates, and where its strengths and weaknesses are. Clearly, benchmarking is not easy, as it is
difficult to understand where the gaps are and how to better improve the organization.

Some of the common benchmarking misconceptions are –

•Benchmarking is not a foolproof method, as there are constant changes within the business environment. Therefore,
making changes may not necessarily take us where we need to go, due to constant updates and changes.
•Benchmarking is not spying or stealing. Benchmarking should be treated as a partnership effort to
understand how we can better improve the process.
•Benchmarking should not be subjective. It uses facts and data to better understand how to improve our own processes
compared to the competition.
•Benchmarking is also not a remedy for all problems. It is a useful method to understand how we
measure up to what is best in class.
•Benchmarking is not strictly a competitive analysis tool, as we can also benchmark against organizations that are not the
competition. The focus is on the process itself, the function of that process, and how to make it better. This means
we could be looking at other organizations that are not competitors, but that conduct a similar process, and try to
determine other opportunities for process improvement.

Types of Benchmarking
There are two types of benchmarking namely – Internal Benchmarking and External Benchmarking.

Some of the points of differences between internal and external benchmarking are listed below.

Basis of Distinction | Internal Benchmarking | External Benchmarking

Comparison | In internal benchmarking, we compare groups within the organization; this could be from one department to the next. | In external benchmarking, we compare against peers and competitors within our marketplace.
Purpose | Internal benchmarking helps organizations gain self-knowledge, as it involves benchmarking our own processes against each other. | External benchmarking helps to reveal the competitive environment, which aims to spur innovation by looking at what the competition does.
Implementation | Internal benchmarking can be very easy to implement because it doesn't take much in terms of extra resources or effort to compare our own processes. | External benchmarking compares our own organization against another organization, so it requires a heavy investment of time and resources.
Risk | There is a low risk of failure, as there is no dependency on external sources to gain new ideas. | Since external benchmarking requires a heavy investment of time and resources, the risk level is high.

Benefits of Internal Benchmarking


•Internal benchmarking holds a mirror up to the organization so that we can look in and understand how our own organization is operating.
•Internal benchmarking helps to identify which in-house practices are considered best practices, so that we can measure and benchmark our own processes against them.
•Internal benchmarking also helps to measure the value of a unit. In the process of benchmarking, we look at differences between projects, personnel, and departments to understand where the best practices are, and then use that information to compare our own processes to other units within the organization.

Benefits of External Benchmarking


There are primarily three different types of external benchmarking –

•Competitive benchmarking – Involves looking at the competitors.

•Best practices benchmarking – Involves looking to see who is best at that type of process in order to gather the best practices.
•Collaborative benchmarking – Involves working with other organizations and benchmarking against each other.

Some of the benefits of external benchmarking are –

•External benchmarking can be very beneficial, as it helps the organization gather knowledge from leading-edge competitors to understand what they are doing differently, which in turn helps to improve our own processes internally.
•External benchmarking also helps in identifying demographic trends and potential product niches. This gives a competitive advantage by revealing the trends that are happening within the industry.
•External benchmarking also helps reveal an organization's competitive strengths and weaknesses by comparing it to the external competition. Based on this understanding, the organization can prioritize its strategic initiatives to help make sure it is moving in the right direction.

Illustrations
 
1.Under external benchmarking, competitive benchmarking involves studying who the leading competitor is and
understanding why they are the leader in that area.

For instance, let us consider a company that wants to develop a new mobile phone. It could look at who its leading
competitor is in the specific areas it is trying to improve, such as the technology itself on a cell phone. The
company would then study the best-in-class competitor to understand which aspect of the product makes them the
leader. Similarly, in an industry such as healthcare, regional hospitals might look at their competition to see who
has better standards and ratings when it comes to performing certain procedures. They could then carry out a
competitive benchmarking analysis to understand why other regional hospitals are better in specific areas, and
use that information to improve their own internal practices.

2.Under external benchmarking, best practices benchmarking involves looking at the process or function the organization is trying
to improve and looking across different industries, not just its own, to determine who is best in class and who has the best
practices.

For instance, a rental car company that wants to improve its customer experience might look outside its own
industry to see who is best at greeting customers when they arrive and when they leave. It might look at the
hospitality or hotel industry, where it is very important that guests have a pleasant experience when checking in
and out of their hotel room. The company could take the lessons from those hotel-industry best practices and use
them within its own rental car environment to improve the customer experience when customers come in to pick
up a rental car and when they come back to return it.

3.Collaborative benchmarking is when two organizations, or a consortium of organizations, work together to benchmark
against each other. This is different from competitive benchmarking because these organizations are typically in agreement
with each other: they agree to work together to help improve their own organizations by helping each other. Collaborative
benchmarking is common across industry, but is probably most common within healthcare, because the healthcare industry as
a whole wants to ensure the best possible outcomes for its own patients and for patients in general. Collaborative
benchmarking has been widely used for performance indicators within healthcare, together with quality and Lean tools and
frameworks such as the Baldrige Criteria, to help improve the patient experience.

Process Components
We say that a process occurs when we take inputs and transform them in such a way that we can deliver outputs
that meet the customer's expectations. The process itself involves the resources, such as people, materials,
energy, equipment, and environment, that are essential to transform those inputs into the necessary outputs.
Common examples of business processes include manufacturing steering knuckles for trucks, the accounts
payable process, and the process of hiring new employees. Each of these has specific inputs into the process and
a very specific output.

A process has several characteristics.


•Inputs – Inputs that make up what’s coming in to develop the product or service.
•Process – The process itself is how that product or service is assembled or changed in some way that it provides value in
terms of what the customer’s expecting.
•Outputs – The outputs are what are actually being delivered to the customer, whether it’s a service or a product, or a test
that’s being accomplished. The overall process is a series of events that produces outputs and they’re defined through
several steps.
•One of the characteristics of a process is that it has boundaries. These are the beginning and end points of the process.
•It is equally important to pay careful attention to the transition points, because this is where mistakes commonly
happen and where information can be misunderstood or not properly transferred. A process swim lane chart is a
special form of process flow diagram. It adds swim lanes to indicate when there is a transition point between
people, departments, divisions, or processes. It can be used to better understand cross-functional business processes,
because it breaks the process down into who is involved, whether different departments or individuals, and the
actions that are necessary. It also represents the process over time, because the flow moves from left to right.

Any time the process moves past one of the swim lanes, this is a transition point where we need to be careful and
manage the documentation and the process to ensure that no information is lost as a product or process
transfers from one department to the next.

For example, a process swim lane diagram might start with the creation of a new supplier form. The documentation is
uploaded and submitted for approval by the originator. The process then transfers down to SharePoint, where the
general counsel is notified. It then moves to the general counsel, and may go back to the originator or stay with
the general counsel until it reaches the next step of approving the new supplier and updating the supplier library.
At this point, it may go back to SharePoint, or to the supplier and then to accounts payable. Each step of the
process that crosses a swim lane is a potential area for defects or misinformation.


Challenges in Process Improvement

Cross-functional challenges
There are mainly three key sources of challenges that can arise when a process improvement effort crosses
various functional areas within the organization. These three sources of challenges are –


Stakeholders: Stakeholders can become a challenge when various functional areas work together yet may not understand
each other's stakeholders. Therefore, as these functional areas start to work together, it is essential that they have an initial
discussion about who the true stakeholders of the combined effort will be.

Communication problems: When different functional areas work together, such as manufacturing, design, and after-market
within a manufacturing organization, there may be slight differences in the terminology each uses, and this can lead to
communication problems.

Cross-functional teams: An organization may pull individuals who are used to working within their own functional areas
into cross-functional teams. By bringing these individuals together from different functional areas, they must learn to work
together and function as a single cross-functional team.
One of the primary challenge areas is stakeholders that exist in each functional area within the organization.
Some of the common difficulties when it comes to stakeholders –

•Getting buy-in from a variety of stakeholders across the various departments and processes.
•Managing multiple process owners and leaders.
•Prioritizing the processes between the different departments.

The second cross-functional challenge stems from trying to manage communication. This can be very complex
within an organization when different functional areas use different terminology. For instance, consider three
departments within a manufacturing organization: finance, product creation, and sales. When they work together
to develop a new product that comes from the product design group, the sales organization needs to make sure
that there is a significant market to justify the new product, and it will need to start drumming up new business for it.

Finance, meanwhile, is concerned with making sure that the design is cost effective and that there will be
sufficient revenue from the new product. As each of these groups talks to the others using slightly different
terminology, and with different focus areas, communication can be very difficult to manage while keeping
everybody working toward the common goal. The third cross-functional challenge is with cross-functional teams
themselves. Bringing together individuals with different experiences and backgrounds from their functional areas,
with different communication styles and different technical terminology, makes it more difficult to get all of
these individuals to truly function as one cross-functional team working toward a common goal.

Process variables
During the process improvement phase, the primary objective of a Six Sigma project is to reduce variation within
the processes and provide a more consistent product or service to the customer every time. In Six Sigma projects,
it is imperative to have a thorough understanding of the process, and thereby to identify all of the inputs to the
process, as the inputs are the most typical source of variation. There are many resources that can add variability
to the process and ultimately affect the process output. Common inputs include the material, people, energy,
equipment, information, and any other resources that feed our process. Consider a manufacturing process in
which castings come in to be machined: the variability may come from the castings themselves, from the people
working on the processes, and from the equipment used to produce the parts. These inputs, along with the
information received on how to produce the product, are all possible sources of variation.
Process outputs are the result of a process, and the output must match what the customer has asked for. The
process output is where the customer sees value in the product or service they have ordered or requested. The
output should also be something quantifiable and measurable: besides delivering the right product or service to
the customer, it is crucial that we can measure the output to ensure that it meets the customer's requirements.

For instance, outputs could be the service received from a call center or a consultation from a doctor, such as a
medical diagnosis. These are typically final outputs. It is also important to note that an output from one process
can be an input to a subsequent process. For instance, in a manufacturing process there may be a foundry that
makes a casting; the casting is then the input to the next step, which could be the manufacturing organization
that machines the casting. As we look at our Six Sigma projects, we need to understand our processes, how there
are internal and external customers, and how the outputs from our process could potentially be inputs to other
processes.

In Six Sigma, the association between the process variables is also an important aspect. The inputs and outputs
of the process are interrelated, and what happens between the inputs and the outputs is typically the key focus of
Six Sigma projects. The process converts the inputs into the outputs so that they are what the customer is
expecting, and we need to make sure that the customers' expectations are met.

As noted earlier, Six Sigma focuses on understanding this relationship so that we know how changing the inputs,
which can be one of the major sources of variation, in a more controlled way can help us achieve the desired
outputs for the customer. When all of the inputs into the system are well understood, together with the variation
that comes from those inputs and the relationship between the inputs and the outputs, we can begin managing
the variation and reducing its impact on the outputs, to better meet our customers' expectations.
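As a rough illustration of this input-to-output relationship, the sketch below treats the output Y as a function of a few inputs X and shows how variation in the inputs propagates to variation in the output. The transfer function, coefficients, and input distributions are all invented for illustration; they are not drawn from any real process.

```python
import random

# Hypothetical transfer function Y = f(X). Coefficients and input
# distributions are invented purely to show how input variation
# propagates into output variation.
def process_output(casting_hardness, operator_skill, machine_wear):
    return 50 + 0.8 * casting_hardness + 5.0 * operator_skill - 12.0 * machine_wear

random.seed(1)
outputs = []
for _ in range(1000):
    casting_hardness = random.gauss(30, 2)    # variation from the castings (material)
    operator_skill = random.gauss(1.0, 0.1)   # variation from the people
    machine_wear = random.gauss(0.2, 0.05)    # variation from the equipment
    outputs.append(process_output(casting_hardness, operator_skill, machine_wear))

mean = sum(outputs) / len(outputs)
stdev = (sum((y - mean) ** 2 for y in outputs) / (len(outputs) - 1)) ** 0.5
print(f"output mean = {mean:.2f}, output standard deviation = {stdev:.2f}")
# Tightening any input's spread (for example, better castings) reduces the
# output standard deviation, which is the goal of a Six Sigma project.
```
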

Using SIPOC
The SIPOC diagram is essentially an extension of the process flow diagram: it extends beyond the inputs and the
outputs to include the suppliers and the customers. The SIPOC diagram represents the suppliers, inputs,
processes, outputs, and customers.

The SIPOC Diagram helps in improving the process by giving a detailed picture of all of the relevant elements that
are involved in the process improvement project.

Steps involved in the process of developing SIPOC diagram are –

•The SIPOC diagram starts with the team identifying the start of the process and collecting a complete list of all the suppliers
involved in developing the final product.
•After the list of suppliers has been developed, the team creates a comprehensive list of all of the inputs into the
process.
•After identifying all the inputs, the steps within the process that convert these inputs into the outputs are mapped
out step by step.
•The team then develops a comprehensive list of the outputs, which could be products, services, or tasks that are
accomplished as the inputs are transformed by the process.
•The last step involves determining all of the customers that receive the outputs from this process.

Once the team develops a comprehensive list of the suppliers, inputs, process steps, outputs, and customers,
this detailed information is added to a table to build the SIPOC diagram and document the information.
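A minimal way to document the five lists as a table is sketched below in Python. The example entries are placeholders a team would replace with its own suppliers, inputs, process steps, outputs, and customers; the layout is not a prescribed SIPOC format.

```python
# A minimal SIPOC documentation sketch; all entries are placeholders.
from itertools import zip_longest

sipoc = {
    "Suppliers": ["Customer service", "Regional sales managers", "Producing plant"],
    "Inputs": ["Product problem report", "Manufacturing QC records"],
    "Process": ["Complaint received", "Problem confirmation", "Root cause investigation",
                "Corrective action plan", "Verification and closure"],
    "Outputs": ["Containment plan", "Closed corrective action"],
    "Customers": ["Regional sales manager", "Customer service"],
}

def print_sipoc(sipoc, width=28):
    """Print the five SIPOC lists side by side as columns of a table."""
    headers = list(sipoc.keys())
    print("".join(h.ljust(width) for h in headers))
    for row in zip_longest(*sipoc.values(), fillvalue=""):
        print("".join(cell.ljust(width) for cell in row))

print_sipoc(sipoc)
```
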

Illustration: We now take into account a SIPOC diagram using a service example for a Corrective Action process
– Manufacturing and Distribution.

Now, when we look at the SIPOC diagram carefully,

•Suppliers would be customer service, because they help supply information into the manufacturing and
distribution process, along with the regional sales managers, the producing plant, and the distribution center.
•Inputs into the process would include the product problem report, manufacturing QC records, and supplier QC records.
•Process steps would include, say, complaint received, problem confirmation, containment actions, root cause
investigation, corrective action plan, verification and closure, and corrective action validated.
•Outputs of the manufacturing and distribution process would include, say, the containment plan, in-house stock
reworked, closed corrective action, and product design or process changes.
•Finally, customers would include the regional sales manager and customer service.

Project Stakeholders
A project stakeholder is someone who has a stake in the project.
• Primary Stakeholders: A primary stakeholder is someone who directly benefits from or is affected by a certain business
activity, say a product or a change to a service agreement. Primary stakeholders include employees, investors, company
owners, creditors, suppliers, and so on.
• Secondary Stakeholders: Secondary stakeholders are those with a more indirect stake in how the project functions.
Secondary stakeholders may include members of the public or families.
Illustration
Let us consider an organization that makes lighting systems for residential customers. In this case,
• Primary stakeholders would be the customers that actually receive the product, i.e., the lighting systems; the employees
that help to make the product; the vendors that buy the product and sell it on to customers; and the stockholders of the
company.
• In the steps involved in process improvement with Six Sigma, it is vital to identify the stakeholders. This can be done
using brainstorming sessions, for instance, with the project sponsor and the team. When we brainstorm with the team,
there are several key questions that we should ask and consider together: who would be impacted, who would provide the
inputs, who will use the inputs, who might hamper the progress, and who uses the process. These key questions help us
identify who should be included in the list of key stakeholders for the project.
Stakeholder management is critical to the success of a project. If a stakeholder is not supportive, or is against the
project, the impact on project success can be extremely unfavorable. There are times when a stakeholder may even
want to obstruct the progress of a project, potentially just because of a competing interest and a lack of time to focus on
the specific project.
Some of the factors to consider with reference to every stakeholder –
• Understand their level of interest in the project and its outcome.
• Potential impact or power they can have over the outcomes.
• It’s important to understand the goal of stakeholder management – it’s to get reluctant or potentially hostile stakeholders
on the side of the project to the point where they’re positive about the project. We also want to get uninterested
stakeholders encouraged and actively supporting the project.
Some of tools used to help manage stakeholders are –

Stakeholder register: This tool is primarily used to identify and manage stakeholders. The table used for the stakeholder
register shows the overall attitude of the key stakeholders and their impact on the project, and it may include other
useful information on the stakeholders. In addition to their name, it also includes their role, their attitude towards the
process improvement, and their potential impact. Other information that can be included is their contact
information, communication requirements that might help change their attitude towards the project, and the expectations
that they have of the project.

Stakeholder Matrix: Another very useful tool for managing stakeholders is the stakeholder matrix. This tool also helps to
identify the stakeholders. In this case, stakeholders are added to a two-by-two grid according to how they fit certain
characteristics, for instance high power or high interest. Pairings that are commonly used are power versus interest,
power versus influence, and impact versus influence. The stakeholders that land in the high-high quadrant are the ones
that require the most attention, as illustrated in the sketch below.
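The sketch below shows one way the matrix idea can be expressed. The names, the 1-to-10 power and interest scores, and the threshold are invented for illustration; the quadrant labels follow the common power/interest convention rather than any single prescribed standard.

```python
# Hypothetical stakeholder matrix: names and 1-10 power/interest scores are
# illustrative only. Stakeholders are sorted into a 2x2 grid.
stakeholders = [
    {"name": "Process owner",   "power": 8, "interest": 9},
    {"name": "Finance manager", "power": 7, "interest": 3},
    {"name": "Line operator",   "power": 2, "interest": 8},
    {"name": "IT support",      "power": 3, "interest": 2},
]

def quadrant(power, interest, threshold=5):
    """Classify a stakeholder into one of the four power/interest quadrants."""
    if power >= threshold and interest >= threshold:
        return "Manage closely (high power, high interest)"
    if power >= threshold:
        return "Keep satisfied (high power, low interest)"
    if interest >= threshold:
        return "Keep informed (low power, high interest)"
    return "Monitor (low power, low interest)"

for s in stakeholders:
    print(f"{s['name']}: {quadrant(s['power'], s['interest'])}")
```
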
Process Owners and Stakeholders
When we start a Lean Six Sigma project, it is essential to identify the key players who are important to the project's
success. One of the most important players in the execution of Six Sigma is the process owner. The process owner is the
person managing or overseeing the process that the improvement work will be carried out on. As a result, they need to
thoroughly understand the process that is the subject of the Six Sigma project.

Process Owners
‘Process owners’ are the people responsible for the overall success or failure of the process improvement effort. It is
important to have the process owners involved, because the Six Sigma team aims to make alterations to the process for
improvement; without buy-in from the process owner, it is very difficult to make the changes necessary for the process
improvement. Even if we make the desired changes to the process, once the project is complete it will be very difficult to
sustain the success without that buy-in from the process owner.

Project Champion
An additional key player in the Six Sigma initiative is the ‘Project Champion’. The project champion is a high-level
individual sitting at the management or executive level. The project champion helps to provide resources and support
for the Six Sigma initiative, and typically has fairly limited hands-on experience with Six Sigma.
Some of the responsibilities of a project champion are,
• The project champion assists in project selection, as they know the long-term strategic vision of the organization.
• The project champion acts as a liaison between the long-term strategic goals of the organization and the Master Black Belt.
• The project champion helps make sure that the right Six Sigma projects are selected to move the organization in the right
direction. They would not necessarily have all of the statistical and process improvement knowledge to train or mentor the
Black Belts, Green Belts, or Yellow Belts.
• The project champion helps provide the resources to make sure the projects get done and are successful.

Enterprise Leadership Team


The Six Sigma leadership starts at the high-level executive team or top-level management which makes up the enterprise
leadership team.
• They help the organization to provide the strategic vision and make sure that Six Sigma is being implemented throughout
the entire organization.
• They help to provide the resources and support for the training.

Business Unit Leadership Team


Under the enterprise leadership team, comes the business unit leadership team. At this stage the projects start getting selected
within each business unit to ensure that the overall success of the organization can be achieved through the process
improvement projects.

Improvement Project Team


Below the business unit leadership team is the next level i.e., the improvement project team. These are the teams that are
directly working on the Lean Six Sigma projects. The team makes sure the projects are getting done and they’re actively
involved in the process improvement activities.

Team Supporters or Employees


Below the improvement project team we have the team supporters or employees. The team supporters are people who
might be called in on an ad hoc basis to make sure the improvement project team can meet its requirements and deadlines.
The Black Belt and Green Belt professionals work with these team supporters and employees to pull in the right people
to get their projects done.

Team leader and Coaches


The team leader and the coaches typically sit at the level just below or around the business unit leadership team. There are
primarily two groups of leadership when we speak about a Lean Six Sigma initiative. These are the enterprise leadership and
the Six Sigma team leadership.

Enterprise Leadership – The enterprise leadership is a group that establishes a vision. They have a strategic view of the
organization and they can use this to help allocate the appropriate resources to make sure that the Lean Six Sigma projects
are actively being worked on. Due to the strategic view, the enterprise leadership team also has an enterprise focus. The
other aspect of the enterprise leadership team is that they can remove roadblocks and help facilitate change because of
their role within the management structure.

Six Sigma team leadership – The Six Sigma team leadership is a group that implements the vision. They have more of
an operational orientation towards process improvement. They manage their resources that are provided by the enterprise
leadership team to make sure that they can get the projects done in an adequate time frame. This team also has a project
goal focus because of their role within Six Sigma process improvement. They are able to manage the roadblocks and
manage the change necessary to make sure that they can reach their goals towards their project improvement initiatives.

Six Sigma Green Belt: Another key role in the Six Sigma initiative is the Six Sigma Green Belt. The Six Sigma Green
Belt is an individual in the organization that’s using Lean Six Sigma tools directly as they relate to their own function
within the organization. They’re mentored by the Black Belts and typically receive their training from the Black Belts.
Their projects focus mainly on their own functional areas rather than Black Belts who could be distributed throughout the
organization for the process improvement efforts. The Green Belts focus on the process improvements that make their job
easier, which roll up into improving the overall effectiveness of the entire organization.

An organization planning to implement Six Sigma projects works towards the central goal of meeting or
exceeding the customer’s requirements and expectations. Now, when we think of requirements we need to look
at those requirements from two different perspectives and angles.


Voice of the Business (VOB): The first perspective is the Voice of the Business, which essentially considers the
business requirements.

Voice of the Customer (VOC): The second perspective is the Voice of the Customer with focus on meeting customer’s
requirements and expectations.
We shall now consider some of the points of differences between the two perspectives,

•Process – VOB is the process of capturing the business requirements, which indicate the needs or constraints of the internal customers, including requirements that relate to process, cost, safety, and delivery. VOC is the process of capturing the customer requirements, which represent the needs and desires of the organization's customers.

•Focus – VOB focuses on internal efficiency and productivity. VOC focuses on external customer requirements and needs.

•Influence – VOB is directly influenced by the necessity of solving operational problems and improvement issues. VOC is directly influenced by customers' needs and expectations expressed in their own words.

•Requirements – VOB requirements are business requirements, including aspects such as profitability, return on investment, cost, and productivity. VOC requirements are stated customer requirements, including aspects such as value, price, durability, and features.
When we talk about business requirements, these are essential drivers for our organization. These requirements
are typically derived from the organization's own mandates, goals, and objectives, and they deal mainly with
inputs. There are also constraints from budgets, governments, or regulatory bodies; these are typically related to
resource limitations and may impose technical or budgetary constraints on a project.

On the other hand, the needs of the customers are typically expressed in terms that need further qualification
because they may be unclear. Therefore, more investigation may be required to really understand what the
customers need, and their words should not be taken literally. At times these needs are expressed in terms such
as attractive, effective, profitable, or rapid, so it becomes important not to just take the words at face value but to
drill down and really understand the underlying needs. These customer objectives should be captured in measurable
terms that indicate what features a customer would like, typically in terms of quality, the purpose of the output, or
the functionality the customer is looking for. Gathering this information from the customer is therefore crucial,
since it helps the organization determine which products or services should be offered and where to focus the
process and quality improvement efforts accordingly.

As an organization, it is important to understand that we need to gather information on both the Voice of the
Business and the Voice of the Customer to get these two different perspectives. This information helps the
organization understand where the two overlap, and that overlap is what helps provide the quality, customer
support, and delivery needed to meet the customer's expectations and make the organization successful. This
helps ensure that the organization meets or exceeds the customer requirements.

Six Sigma – Voice of the Customer (VOC) Strategy


While adopting the VOC strategy, the following five key tasks are needed for successful execution.

•First, we define the goals of implementing the Voice of the Customer process.
•Next, we identify the customers, the customer types, and the segments in relation to the project.
•After clearly charting the focus customer groups, the next step involves planning for and collecting the data.
•After all the data has been collected, the next step is to analyze the data.
•The final step involves determining the customer requirements and then using these customer requirements as
action goals.

Illustration
We consider the example of a Six Sigma project at a technology retailer to demonstrate how to implement the
Voice of the Customer strategy. Within this technology retailer, sales have not been up to the mark, and some
customers have indicated that they aren't happy with the billing process. The implementation involves the
following steps –

1.The first step involves defining the Voice of the Customer process goals. The Six Sigma team at the online retailer
confirmed that the purpose of the Voice of the Customer exercise was to determine why customers are unhappy with the
current online billing process.
2.With the purpose statement set, determining why customers are dissatisfied with the company's billing process, the team
next moves on to identifying its customers. Since the company is an online retailer, the customers it wants to target are the
online buyers using the company's web site within a specified demographic group and market segment. The team wants to
make sure that it is capturing the Voice of the Customer from its key customers, who are between the ages of 18 and 30 and
based in a specific area.
3.The third step involves planning for and collecting the Voice of the Customer data. Since the store collects customer
information through online profiles, the team gathers information about regular customers who have had billing issues in the
past month, together with information on customers who have filed online complaints or called the helpdesk about their
issues. The team can then collect information from those complaints and also collect follow-up data from customers where it
needs to fill in the gaps.
4.The fourth step for the online technology retailer involves developing the Voice of the Customer strategy to analyze the data
and then determine the customer requirements. In this step the team analyzes all of the data, which helps it determine that
customers want immediate confirmation of their payment and an online receipt, and that this is very important to correcting
the billing process. The team was able to do this by using Six Sigma tools such as the Kano model, Quality Function
Deployment, and Critical to Quality trees (a simple tally of complaint categories is sketched after this list). This helps identify
trends in the marketplace and determine ways to better meet customer expectations.
5.The final step involves determining the customer requirements. From the analysis, the team was able to understand that
customers enjoy the store's low prices, but what they actually want is better service at checkout and better information about
their payment. The key customer requirement was therefore focused on improving the payment gateway and the checkout
and billing processes. The team set the goal of meeting this need as its top priority by making the payment gateway the key
focus of its project; consequently, it became the basis of the team's Six Sigma project.
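As a small illustration of the kind of analysis mentioned in steps 3 and 4 above, the sketch below tallies VOC complaint categories so a team can see which issues occur most often. The complaint records are fabricated examples, not data from the case described.

```python
from collections import Counter

# Fabricated VOC complaint records, used only to illustrate a simple
# Pareto-style tally of customer feedback categories.
complaints = [
    "no payment confirmation", "duplicate charge", "no payment confirmation",
    "slow checkout", "no online receipt", "no payment confirmation",
    "slow checkout", "no online receipt", "no online receipt",
]

counts = Counter(complaints)
total = sum(counts.values())
cumulative = 0
for category, count in counts.most_common():
    cumulative += count
    print(f"{category}: {count} ({count / total:.0%}, cumulative {cumulative / total:.0%})")
```
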
Types of Customers
An organization working towards process improvement by implementing a Six Sigma project must first aim to
understand its customers. With the implementation of Six Sigma, it is extremely critical to identify all customers
and know all of their expectations and priorities, as these are essential for process improvement. Typically there
are two types of customers, namely internal and external.


External Customers: The external customers are the customers that are outside of the organization and are typically the
final customers. External Customers are the ones that pay for the products and services and that consume our products
and services. External customer of the process could also be a channel purchaser, in other words a distributor, a retailer, a
reseller, or independent representatives within the supply chain. So if we talk about a banking operation, or depositing
cheques into the bank account, then the external customers are external to the bank. These are the patrons of the bank that
have bank accounts within the organization. When we talk about healthcare, the external customers would be the patients;
the ones that are coming in to be treated for an ailment, or coming in for their annual checkup.

Internal Customers: We have internal customers that are within our own organization. Internal customers are our
internal coworkers and they also play an important role in understanding the expectations of all of our final customers.
Here internal customers are those customers that are internal to the organization. Internal customers get value from, or are
affected by, downstream processes, and they work to serve the external customers. They are also the people who work
to make the products and services available to the external customers. Let us take an example to illustrate this
further: when we think about the receiver of the next operation, if it's an internal department, it should be thought of as an
internal customer. Likewise if we think back to the healthcare sector, when a physician submits a request for prescription
in a hospital, an internal customer could be the pharmacy within the hospital. Another example is when we think about
banking sector, where someone comes in to apply for a bank loan they see a loan agent; however, the next step of the
process, and whether or not the loan is approved, is typically handled by a loan manager or a department manager. They
receive all of the information and, therefore, they are a customer to that information within the process.
One of the most useful tools for understanding the customer value chain and the internal and external customers is
the SIPOC diagram, which defines the suppliers, inputs, processes, outputs, and customers. The SIPOC diagram
documents the entire value chain and helps to indicate and illustrate the internal and external customers within
the process. When we consider a process, both internal and external customers fit into the organization's value
chain and can be depicted on the SIPOC diagram. The internal events or processes overlap each other at the
hand-off points between our internal customers, producers, and suppliers. The points where the processes
overlap form the links in this chain. Each link is an area where internal customer satisfaction can be tracked and
improved, and we can then see the impact it has on our final external customer.

Impact of Six Sigma Projects


Before we begin executing Lean Six Sigma projects, it is crucial to understand the potential benefits the project
might have for internal customers, as these benefits often translate into benefits for our external customers as
well. Therefore we must make sure to listen carefully to the needs of both internal and external customers.

In particular, for internal customers such as suppliers, process owners, and employees linked to the particular process,
product, or service that we aim to improve with our Lean Six Sigma efforts, the project brings several potential
benefits.

•It tends to build a more streamlined, efficient, and less complex process.
•There is typically some general skills building, re-training, and retooling as part of the Six Sigma project.
•The process improvement efforts can also alter the process inputs, change the output requirements, or change the tool
requirements, which has an impact on the scope and schedule of the work.
•The altered scope and schedule might also affect how we streamline our procedures and methods within our process
environment.
In addition to the above benefits, external customers typically see improved process outputs from the changes
made through Lean Six Sigma efforts. Some of the benefits that Six Sigma projects have for external customers are –

•The benefits that internal customers realize often translate into benefits for the external customers as well.
•External customers typically see improved quality, delivery, and service based on the continuous improvement efforts of
the Six Sigma team.
•The Six Sigma team prioritizes the projects it works on so that the most pressing customer needs and requirements are
addressed.

As the team focuses on making improvements within its Six Sigma initiatives, it continuously goes back and talks
to the customers, both internal and external, to make sure it understands the customers' requirements; this leads
to better communication with the customer. Finally, by making improvements in quality, delivery, and service, we
also improve our communication with our customers, leading to increased external customer satisfaction with the
company and with the services and products the organization offers.

Customer Data
The process of collecting customer data is essential to a Six Sigma improvement project. The customer data
collected by the Six Sigma team supports the Voice of the Customer strategy of translating raw customer data
into precise customer requirements during the Define phase of the DMAIC project. These critical customer
requirements must be identified and measured before moving on to the next stages of a Six Sigma project. In
general, organizations like to collect customer data in terms of the number of defects or trends. Some of the
benefits of collecting customer data are –

•Collecting customer data allows teams to identify the urgent problems being faced, thereby helping the organization gain
a competitive edge over its competitors.
•Customer data helps in knowing the customers' preferences and needs.
•It helps to define which products and services we should offer, and also the critical features and specifications
for those products and services.
•Customer data also helps to determine the level of quality the customer desires.
•It helps the organization measure customer satisfaction to ensure that it is meeting or exceeding the needs and
expectations of the customer.

Data Types
The data that is collected can come from two sources: it can be either primary or secondary data, depending on
the type of information being collected.


Primary Data: Primary data comes from direct interaction with the customers. Primary data sources include
observations, interviews, focus groups, and surveys. When using primary sources of data, we go straight to the
customer to collect the data, which is why these methods are called ‘direct sources’.

Secondary Data: The Secondary data sources are known as ‘indirect sources’. These secondary sources already exist in
the system. This data can be information collected for another purpose, such as a different project. Some of the examples
of secondary data include industry experts, market watchers, other external sources, and other internal projects.
In general, a Six Sigma project uses both sources of data collection, primary as well as secondary.
Collecting Voice of the Customer data aims at understanding the needs and wants of the customers, and it
therefore becomes important to use that information in a general input-process-output flow to understand the
overall functionality.
Steps involved in the process of data collection are,
•The data collection process starts with understanding the customer's requirements.
•This data is then converted into useful information, such as the desired characteristics for all of the raw
materials, the quality levels, and the process steps.
•This helps in further defining the processes needed to reach the desired output.

We must keep in mind that collecting data from our customers is not a one-time event. Since customers'
expectations and requirements change over time, it is important to have an ongoing customer feedback loop with
a defined process, so that the organization stays responsive and the team is well prepared to respond quickly to
ever-changing customer demands. Continuously gathering information enables us to meet all of the customers'
expectations and requirements, and this continuous feedback loop keeps us alert to changes in customers'
preferences.

Tools for Data Collection


Some of the commonly used data collection tools include observations, interviews, focus groups, and surveys.
We will now discuss each of these in detail, so as to know under which conditions each is most appropriate.

Observation
The first data collection tool is observation. Observation involves watching customers as they use the product or
service, in the environment where they would actually be using it. Observation therefore captures a real-life
environment: since there is no control over the customers' actions, what we see depends on how they actually
use the product or service. It also helps us get a much more realistic, first-hand understanding of how they
interact with the process, product, or service. This method is beneficial when we want to observe customers in
their normal environment; alternatively, we can observe customers using the product in a lab-type setting, where
the work situation is set up and each of the participants interacts with it. Observation should be used when we
want insight into what it is really like to be one of the customers, or when we need to see specifically how
something is being used.

Interview
The next data collection tool is the interview. Interviews are conducted with customers either face-to-face or over
the telephone. Interviews are typically conducted to gain a reasonable, unbiased view of the product or service
from the customer's perspective, and they help ensure that no issues are overlooked that could negatively affect
value and customer satisfaction.

Conducting an interview is a useful data collection approach when we need an individual's perspective rather
than a group's. Interviews conducted face to face also help gather other information, such as facial expressions
and body language, while interviews conducted over the phone can capture tone of voice.

Some of the benefits of interview for data collection –


•Interviews help gather a unique perspective, as the questions are more open-ended.
•Interviews help pursue unexpected lines of information; we can go further down a path and look for clarification, and this
method really supports that.
•Interviews provide a more in-depth understanding of what we're trying to gather from the customer.
•The insights gathered from the customer may help lead to further innovation in products and services.
•Face-to-face interviews can be very helpful because they provide an opportunity to create a rapport with the person being
interviewed, which may not be as easy when collecting this information over the phone.
•Face-to-face interviews allow capturing visual cues, such as body language or facial expressions.
•Face-to-face interviews also allow us to make eye contact, so we can pursue more complex questions and have a
more in-depth discussion.
•Face-to-face interviews can be a much more personal way of gathering data from the customer to understand their needs
and expectations.
•Interviews over the phone help gather information from customers that are widely dispersed over a large geographic
region.
•Telephone interviews are also good for collecting information about basic or simple issues, though they are less suited to
discussing more complex issues and pursuing follow-on questions.
•Phone interviews are useful as they can be completed in a fairly short time frame and provide a quick turnaround on the
information.
•Phone interviews are a good way to get a lot of data at a fairly low cost.

Focus Group
A focus group is another kind of data collection tool. It typically consists of a small group of current or potential
customers who are involved in a more structured discussion. This method allows for more spontaneous
information than interviews do, and the discussion can be used to further explore participants' needs, opinions,
and attitudes in more detail. Focus groups help gather information from a small group of customers, ideally ten
people or fewer, and capture their feedback in one place at one time. A focus group is a good setting for
brainstorming and for watching customers bounce ideas off of each other. Focus groups are suggested when we
are trying to gather information from customers who have similar product or service needs; we can conduct a
focus group of several people that are in a similar segment.

For example, say we have purchasing managers from several companies who intend to buy the product for their
own manufacturing needs; we could gather them together to get their insight. Focus groups can be more
beneficial than other methods because they are not as restrictive as a survey: we can ask more open-ended
questions, and because we meet with several participants at once, it is not as time consuming as doing individual
interviews. Another benefit of focus groups for data collection is that we can introduce props. This means we can
bring in product prototypes and marketing material and see how customers interact with these props to gain
further information from them.

Surveys
The next type of data collection tool is the survey. Surveys are extremely useful since they are very versatile: they
can be conducted in person, over the phone, or by e-mail. Surveys allow us to collect extensive information from a
large group of customers, and the responses gathered are quantifiable. This means we can produce statistical
output, where the other methodologies provide more qualitative data.

Some of the benefits of survey method used for data collection –


•Surveys can be used in conjunction with any of the other forms of data collection.
•Surveys are very useful as a way to gather quantifiable data in a very consistent way, since a survey is exactly
the same every time.
•Surveys are most useful when we have a basic or simple issue that we're trying to solve.
•Surveys can be sent out broadly, and we need not set up interviews or focus groups, so they usually complete in a very
short time frame with a very quick turnaround.
•Surveys help to get a lot of data for a very low cost.
Selecting Data Collection Tools
This job aid helps us select the most appropriate data collection tool for the information needs of the project.

Observation

Features: Involves watching a person's behavior in a given situation; considered one of the most reliable methods to understand how things are done.

Situation: Used to observe the effectiveness of employee performance while performing a process; used to evaluate any customer-facing staff during customer interactions in person, on the phone, or wherever they interact.

Strengths and Weaknesses: It can help focus efforts on what customers really need; it can be used with a small number of subjects; it can identify where customers or employees have problems; its one-on-one basis makes it time consuming.

Best practices: Be clear about why we are doing the observation; decide how we'll observe customers; create and test an observation form or checklist; contact the customer; ensure observers are properly trained; perform data analysis; follow up with customers.

Interview

Features: Involves one-on-one questioning of individuals, either in person or over the phone.

Situation: Used to develop new insights and pursue new lines of questioning as they develop.

Strengths and Weaknesses: It allows for greater interaction, visual cues, and more complex and in-depth questions; it is labor intensive and expensive per interview; it may require significant effort to reach all needed respondents.

Best practices: Clarify the purpose; prepare questions; decide the interview format; determine the number of interviewers and interviewees; practice the interview; contact customers; specify how we'll collect information; conduct the interview; transcribe and analyze the data.

Focus Group

Features: Involves small groups (usually fewer than ten people) discussing a specific topic.

Situation: Used to measure reactions to concepts, key features of a product, new packaging, or advertising; used to assess the effectiveness of advertising; used to generate ideas for new products and services; used to provide evidence for claims about products.

Strengths and Weaknesses: It provides useful information about people's attitudes; it ensures greater involvement because focus groups are small and specifically tasked with testing ideas and gaining opinions; it can be costly and resource intensive, requiring facilitators and venues, and participants may need to be paid; it may be considered less valid because the opinions or reactions are collected from such a small sample.

Best practices: Determine the number of participants; identify customers; specify questions and conduct a practice session; conduct the focus group; transcribe the session; follow up.

Survey

Features: Data is collected from a sample of a population, typically using questionnaires, and then inferences are made about the population as a whole.

Situation: Used to get quantifiable and statistically reliable data on a large population; used to confirm theories or information we've developed using other tools.

Strengths and Weaknesses: It can be in-depth enough to allow the data to be assessed in a variety of ways; it can be used on large populations, increasing the chances that a wider range of respondents can be assessed; it requires considerable time and effort to plan and execute; it is prone to scope creep, becoming unwieldy and uninformative; poor planning and badly targeted questions can render results less useful.

Best practices: Determine the objective; identify the sample size; draft questions and choose measurement scales; determine how to code responses; create the survey; ensure questions will meet the objective; conduct a pilot test; finalize the survey; send out the survey.

For data collection to be effective, the data collected must be valid, reliable, and bias free. Only with these
characteristics will the data be useful and hold up to scrutiny during data analysis. With reference to Six Sigma,
the three key terms that refer to accuracy in data collection are reliability, validity, and margin of error.


Reliability: Reliability refers to the consistency of the data collection method. The higher the sample size is in relation to
the population size, the more reliable it is.

Validity: Validity refers to the accuracy of the data collection efforts being made. The purpose here is to analyze that the
chosen data collection method truly measures what it seeks to measure, and if it does, then it must be considered valid.

Margin of Error: Margin of error ties into surveys because they are subject to some uncertainty about how well a sample
represents a population, and about the validity and reliability of the testing tool. It is important to make every effort to keep
the data free of errors, as errors may affect both the reliability and the validity. In practice this means the error should not be
so significant that it prevents us from reaching valid conclusions; a minimal calculation sketch follows this list.
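As a rough numeric illustration, the sketch below computes the margin of error for a sample proportion at roughly a 95% confidence level. It assumes a simple random sample and a proportion-type survey question; the respondent counts and percentages are invented for the example.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate margin of error for a sample proportion at ~95% confidence (z = 1.96)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# Hypothetical survey: 400 respondents, 62% answered "satisfied".
p_hat, n = 0.62, 400
print(f"Margin of error: +/- {margin_of_error(p_hat, n):.1%}")      # about +/- 4.8%
# A larger sample size shrinks the margin of error:
print(f"With n = 1600: +/- {margin_of_error(p_hat, 1600):.1%}")     # about +/- 2.4%
```
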
Primarily there are two types of errors such as sampling error and non-sampling error.


Sampling Error: A sampling error is statistical in nature. It arises because only a portion of the population is surveyed
rather than the entire population, so the sample may not perfectly represent the whole.

Non-Sampling Errors: Non-sampling errors are not statistical in nature; they are caused by human error, for example
mistakes in collecting, recording, or processing the data.
Being a Six Sigma professional, it is crucial to understand how effective data collection ties into Six Sigma. The
cornerstones of quality data collection are reliability and validity; they are essential to guide the Six Sigma
improvement efforts and minimize the margin of error. If the data is consistent, stable, and repeatable, then we
know that we can rely on the results being accurate. The major causes of ineffective data readings are vagueness
and ambiguity, together with poor instrument design, which leads to erroneous readings. To avoid such data
collection errors, it is important to avoid using subjective measures, as they may not have a clear definition, and
to avoid asking nonspecific questions or the wrong questions that are not related to the process improvement.

Errors in the data collection effort lead to poor results. When we conduct surveys, focus groups, interviews, or
direct observations, we must make sure that we have a sufficiently large sample size. With an insufficient sample
size, results tend to be poor and do not necessarily represent the entire population. Errors may also occur if the
information we receive from surveys, focus groups, or interviews is difficult to interpret; that might lead us to
make erroneous assumptions or to use tools incorrectly when analyzing the data, and in the end we could draw
incorrect conclusions from our data.

Sources of Data Bias and Errors


Bias is one of the causes of ineffective data collection. Bias is a very common type of data collection error that is systematic in nature, and it can bring about significant error and incorrect conclusions in research and process improvement efforts. Such errors occur when the data is influenced in some way so that it no longer represents the population being sampled. For instance, a survey respondent may give responses that do not truly reflect their opinions because they want to appear a certain way, or an interviewee who is uncomfortable with the interviewer may rush through their answers because they want to finish the interview.

Some of the common sources of bias and error in the process data collection –

•Too many questions can cause bias or error in surveys and interviews. Surveys and interviews should be fairly concise and focused on the right questions; otherwise, ambiguity creeps in when we try to understand the customer’s true responses.
•Multiple questions combined into one question can lead to error in all four data collection methods: surveys, focus groups, interviews, and direct observations. Because the question is not specific and direct, it becomes difficult to capture what the respondent’s true answer to each part really is.
•Bias and error in data collection can also be caused by leading phrasing. If we are not careful in how we word questions, we may lead our respondents in a particular direction. We can likewise lead by behavior in surveys, focus groups, and interviews: the demeanor with which questions are asked can steer our customers’ responses down a specific path.
•Another source of bias and error in data collection is the surroundings, predominantly in interviews and direct observations. Because these are conducted where the product or service is actually being used, we should keep the setting as realistic as possible; otherwise, outside influences can bias the results.

Vagueness, Ambiguity, and Bias


We must review the data collection questions to help eliminate vagueness, ambiguity, and bias. There are four factors that contribute to the ambiguity and bias that lead to ineffective data collection:


Poor design: Poor design refers to poor instrument design, which includes poorly designed surveys and poorly designed questions. For surveys, we want to ensure that the flow is effective and that anybody taking the survey can easily understand the questions. For questions, we must ensure that there are not too many of them and that all possible ratings for each question are covered. All of these issues can lead to errors that make the data collection effort ineffective.

Subjective measures: Subjective measures lead to ineffective data collection. When gathering the Voice of the Customer through surveys, focus groups, interviews, or other means, it is important to ask questions in a way that does not rely on the customer’s subjective perceptions; we need to ask for objective measures. Rather than asking customers how they feel about a certain situation, we could ask how often that situation occurred over a period of time. This provides a more objective measure and quantifiable data.

Incorrect questions: Incorrect questions also lead to vagueness, ambiguity, and bias, and therefore to ineffective data collection. It is crucial to ask the right questions. To avoid asking incorrect questions, it is important to run a pilot study: develop the data collection methodology, run it with a small set of individuals first, and confirm that the responses are as expected and that the data collected in the pilot answers the appropriate questions.

Nonspecific questions: Nonspecific questions may lead to ineffective data collection, because asking non-specific questions leads to nonspecific answers. For instance, if we ask a customer about their overall experience during their last banking transaction, we will only get a high-level view. If we instead want to understand how friendly and helpful the tellers were, we need to ask for specific details about those aspects of the experience; otherwise the data is unmanageable. Therefore, make sure to ask specific questions about what we are trying to improve.

Purpose: This job aid helps to identify potential sources of bias and error in four customer data collection tools. Some of the potential sources of bias and error are shown in the table below.

Bias and error in data collection

Sources                               Surveys   Focus groups   Interviews   Direct observations

Too many questions                       X                          X
Multiple questions in one question       X             X            X                X
Leading by phrasing                      X                          X
Leading by behavior                      X             X            X
Influenced by surroundings                                           X                X
Sampling errors                          X             X            X                X
Poor results                             X             X            X                X
Difficulty interpreting                  X             X            X

Understanding Customer Requirements


The aim of the Six Sigma team is to understand the actual customer’s requirements. It is very important to
understand the requirements before translating them into product features, performance measures, or
opportunities for improvement with the Six Sigma projects. Primarily the customer requirements are based on
three elements – expectations, needs, and priorities.

We can define customer requirements as the complex group of criteria that customers use to make their purchasing decisions. For that reason, one must carefully consider the customer’s requirements to make sure the right information is obtained. An organization might feel that it offers the world’s greatest product, but if the customer doesn’t want it, it has no value and they are not going to buy it. It is mainly the customer that determines the value of the product, and they convey their requirements by either buying the product or not buying it. This information helps to translate the customer requirements into project goals and objectives that help the organization become more successful.

Customer expectations can be defined as those requirements or expectations of the product or service that make it valuable and worth buying. Customer expectations are typically classified in a hierarchy as basic, expected, desired, or unanticipated.


Basic Expectation: Basic expectations are the absolute minimum qualities that must be present in order for the product to
be acceptable. For instance, while buying a new television, a basic expectation would be that the television comes with a
remote control and it functions properly.

Expected Expectations: Expected expectations are the qualities that generally come with a normal product. For instance, the television should include an instruction manual and a warranty that protects it from initial defects.

Desired Expectations: Desired expectations take us to the next step; these are things that the customer would specifically ask for. For instance, they might want their television to be an HD television.

Unanticipated Expectations: These refer to things the customer isn’t even aware of, so they wouldn’t know to ask for them, and they can become a market differentiator. For instance, the television might have an interactive display; this unanticipated aspect of the television would bring a competitive advantage and thereby provide additional value.
Customer needs are always fluctuating and always evolving. It certainly benefits the business if we are able to provide products and services that meet these changing requirements, but it can also be very difficult to figure out exactly what it is that customers require. Moreover, when the customer’s basic needs are met, new needs are often created.

Primarily there are different types of customer needs.


Stated needs – Stated needs are what the customers actually say they want, so we need to understand what the customer
is specifically asking for.

Real needs – The real needs refer to what the customers actually need.

Perceived needs: They are what the customers think they need.

Cultural needs: These refer to the status that customers think they will attain from buying that product or getting that service.

Unintended needs: Unintended results occur when customers use the product in an unintended way.
We must also recognize that customers have different needs depending on whether they are buying a product or getting a service. When buying a product, there are six things that customers need.


Convenience: Every customer is engrossed in their own life, and they want to buy products that help make their lives easier. For instance, pre-packed food items that only require reheating help save time.

Safety: Nowadays safety has become an ever-increasing concern for customers, and it is a key consideration when choosing new products. For instance, an easy-to-use baby fence could meet the customer’s safety needs.

Simple: Customers are also looking for simple features. They don’t want to spend hours figuring out how to use a new product. Simple features, such as a touch menu on a mobile phone, make it easier for customers to use new products.

Communication: Customers are also looking for communication. They need to feel informed and they want access to information about the products they buy. An example could be a 24-hour hotline that provides information about the product and potential recalls.

Service: Customers need good service for their product. If they buy a defective product, they want to know that the company will refund their money or offer a replacement. Service also refers to warranties or exchange policies.

Customer service: Customers also want access to trained personnel who can help solve problems or handle their complaints, for instance, a customer service employee authorized to offer solutions without having to speak to a supervisor.

When customers buy a service rather than a product, the seven specific customer service needs are –


Convenience: Busy customers want services that offer them convenience. For instance, many companies now provide personal shopping for customers who do not have time to run their own errands.

Courteousness: Customers require courteousness and civility when they buy a service. For instance, when we go to a food court, we expect the server to be polite; if they are not helpful, we are probably going to leave a bad review.

Competent and reliable: Customers expect to deal with competent and reliable staff members when they pay for services. For instance, when customers leave their clothes for dry cleaning, they expect the staff not to damage their clothes.

Responsiveness: Customers want assurance that when they have a problem with the service, the company and its staff will respond quickly. For instance, when a hospital patient has a concern, they don’t want to wait hours to get a response.

Safety: Every customer expects the service provider to make safety a top priority. For instance, a customer taking a cab wants assurance that the driver is licensed and can take them to their destination safely.

Trust: No customer wants to buy from a source they do not trust. They want to know that their service providers are honest and truthful in their intent. For instance, when we shop at a big brand, we believe and trust that the charges are correct.

Facilities: A customer visiting a service provider’s facility expects that the place has been properly maintained with
everything functioning properly. For instance, when we visit a bank’s ATM, we expect that we’re going to get the money
that we requested.

Determining customer priorities: The last element of customer requirements involves determining the customer’s priorities. It is important for the organization to understand the customer’s expectations and needs in order to know what the company’s priorities should be. For this, the company may use data collection tools such as surveys, interviews, or focus groups to pin down the customer’s priorities. Note that identifying customer requirements is not a one-time effort: what is a high priority today might be irrelevant in a year or so, since customers’ expectations and needs keep changing.
Therefore, it is very important to prioritize the customer’s needs and expectations in a way that the company is
able to respond quickly to the customer’s changing requirements. This will help us as an organization continue to
provide the products and services that the customers find valuable.

Kano Analysis
Kano analysis, or the Kano diagram, is a methodology used to identify true customer requirements. The Kano model uses a quadrant format that allocates each product or service characteristic into one of four quadrants. The diagram graphs where each aspect falls with respect to the performance of the product or service and the resulting customer satisfaction.
The X-axis measures the product’s achievement and functionality, i.e., whether the product or process performs poorly or performs well. The Y-axis represents customer satisfaction; it starts with low customer satisfaction and moves up to high customer satisfaction.

Here, Kano analysis addresses three levels of customers’ needs –


Dissatisfiers: Dissatisfiers refer to the basic-level quality requirements that customers expect to have. For instance, a customer assumes that an accountant will be able to post their organization’s entries correctly and that an auto mechanic will be able to fix a flat tire.

Satisfiers: Satisfiers refer to the little extras that keep customers contented. For instance, satisfiers may be delivery of a pizza in less than 30 minutes, a shorter waiting time at a bank, or a free oil change at a dealership.

Delighters: Delighters are the unexpected and exciting attributes that give customers more than they expect. Indeed as a
customer we might be delighted when a bank teller greets us by name, or an automobile service department at a
dealership washes and vacuums the car.
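As a rough illustration of how these three levels can be applied, the hypothetical Python sketch below classifies product or service attributes from two simple survey-style scores: satisfaction when the attribute is present and dissatisfaction when it is absent. The attribute names, scores, and thresholds are assumptions for illustration only; they are not part of the formal Kano questionnaire method.

```python
# Hypothetical sketch: sorting attributes into the three Kano levels described
# above. Scores are on a 1-5 scale (an assumption for illustration):
#   present = customer satisfaction when the attribute is delivered well
#   absent  = customer dissatisfaction when the attribute is missing or poor

def kano_level(present: int, absent: int) -> str:
    """Classify one attribute as a Dissatisfier, Satisfier, or Delighter."""
    if absent >= 4 and present <= 2:
        # Missing it hurts badly, but having it adds little extra delight:
        # a basic expectation, i.e. a potential dissatisfier.
        return "Dissatisfier"
    if present >= 4 and absent <= 2:
        # Having it delights customers, but nobody misses it when absent.
        return "Delighter"
    # Satisfaction rises roughly in step with how well it is performed.
    return "Satisfier"

# Example attributes for a bank branch (invented data)
attributes = {
    "Accurate transaction posting": (2, 5),
    "Short waiting time": (4, 3),
    "Teller greets you by name": (5, 1),
}

for name, (present, absent) in attributes.items():
    print(f"{name}: {kano_level(present, absent)}")
```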
Quality Function Deployment (QFD)
Quality function deployment (QFD) is a tool that takes the Voice of the Customer (VOC) and translates it into process, product, or service designs. The goal of quality function deployment is to translate what are often subjective criteria into objective criteria that can be used to design and manufacture a product or service. The QFD methodology is divided into four key phases, each of which uses one or more matrices.

•First phase starts with product planning


•Second phase involves translation of information down to the part deployment
•Third phase involves translating down to process planning
•Fourth phase involves process control.

QFD enables the Voice of the Customer to be carried from the high-level design through to how the processes are managed and which critical characteristics are controlled, to ensure that we continue to meet the Voice of the Customer requirements. Quality function deployment (QFD) consists of four phases of interlocking matrices that connect the Voice of the Customer to the design requirements and the product features –

•First phase is the product planning phase, in which case the customer requirements are converted into the design or
technical requirements.
•Second phase involves taking this information in the part deployment phase. This is where the technical requirements
then become the input into the part development phase. Such that the technical requirements are converted into part
requirements.
•The third phase is the process planning phase where the part requirements are then taken into the process planning phase
to make sure that we have the appropriate process planning characteristics to fulfill the necessary part requirements that
we’ve identified. Such that in this phase, we also start to select our process.
•The fourth phase is the process control phase, where the outputs of the process planning phase are used as inputs. In this phase we develop the process control, inspection, and test methods to make sure that we are monitoring and controlling our process.

With quality function deployment, by having all of these matrices interlocking, we make sure that the Voice of the
Customer is our key input to our process and it’s translated throughout each of these matrices to make sure that
we maintain and control the Voice of the Customer requirements. This helps to make sure that we meet or
exceed our customer’s expectations.

The first phase in quality function deployment is the product planning phase. This phase provides an integrated
set of tables that, when it’s completed, looks like the structure of a house and this is where it gets its name – the
House of Quality, or HOQ. The HOQ matrix allows us to identify the customer requirements and determine their
technical implications and the relationships between these different aspects.

House of Quality (HOQ) is an integrated set of tables created during the product planning phase in quality function deployment. This initial matrix, the House of Quality matrix, allows us to identify the customer requirements, or the VOC, determine their technical implications, and infer the relationships between the different technical requirements. These integrated tables work together to take in the Voice of the Customer, the customer requirements, and translate them into technical requirements. The House of Quality matrix is divided into several rooms: one focuses on the customer, and the rest focus on the technical aspects and the interrelationships between the customer and the technical issues.

Fig: HOQ Matrix

Let’s take a brief look at what a finished House of Quality could look like. There is a considerable amount of information contained in just one matrix. The customer requirements are located in the House of Quality, along with the design considerations and the different design alternatives that are put into the grid. Weighted scores are then assigned based on market research so that we understand how we rank against our competitors. The larger matrix documents the design team’s perceptions of the relationships that exist between these different items, so that the relationships can be interpreted in the downstream processes of QFD. This is important because the House of Quality documents how the product addresses and satisfies all of the customer’s stated and unstated needs.

HOQ Diagram – Customer Focus


We shall now define the various rooms in the House of Quality in more detail.


Customer Requirements: The first room holds the customer requirements. It lists the customer’s needs and wants, representing the Voice of the Customer for a particular product or service. These requirements are primarily derived from customer statements gathered during the initial phase of quality function deployment. The customer requirements are often referred to as the “whats” of the House of Quality matrix, as this is what they represent.
The customer requirements component lists the customer’s needs and wants – the Voice of the Customer (VOC) – for a particular product. These requirements are derived from customer statements gathered during the initial phases of QFD. They are often referred to as the “WHATs” of the HOQ matrix.

Customer Importance: The second room in the House of Quality is customer importance, as illustrated in the diagram. This room of the HOQ matrix is where the customer’s perceptions and priorities are listed for each of the customer requirements, or the Voice of the Customer. The information used to prioritize is generally gathered during the market survey. At this stage the team determines, based on the customer survey feedback, the importance of each customer requirement, and this information is rated on a numerical scale to provide the prioritization.
The customer importance component lists the customers’ perceptions and priorities for each of the listed customer
requirements, as gathered during the market survey. The importance of each customer requirement is rated on a
numerical scale.

HOQ Diagram – Technical Requirement



Technical Requirements: The next room in the House of Quality contains the technical requirements. In this room the team lists the measurable product specifications defined by the organization; these define how the customer requirements will be met. The technical requirements are often referred to as the “hows” of the House of Quality matrix, since they explain how we are going to meet the customer requirements.
The technical requirements component lists the measurable product specifications as defined by the manufacturing
company. They define how the customer requirements will be met, and must be characteristics that can be
measured and given target values. They are often referred to as the “HOWs” of the HOQ matrix.

Technical priorities: The next room in the House of Quality is technical priorities. The purpose of this room is to detail the priorities and the measures of technical performance achieved by competitor products. It is also important in this section to show the degree of difficulty involved in developing each of the technical requirements. The product will then be evaluated against these priorities throughout the entire design process. Note that, at this point, these values are based only on estimates of the probability of achieving the target values. Typically the technical priorities are rated on a scale of 1 to 5.
The technical priorities component details the priorities, measures of technical performance achieved by competitive
products, and degree of difficulty involved in developing each requirement assigned to technical requirements.
The product design is then evaluated with respect to these priorities throughout the design process. However,
remember that these values are based only on estimates of the probability of achieving the target values. These
technical priorities are usually rated on a scale of one to five.

Interrelationships: The next room in the House of Quality is the interrelationship section. The interrelationship matrix shows the design team’s perception of the relationships that exist between the customer requirements and the technical requirements. These relationships are classified as weak, medium, or strong. The purpose of the interrelationship matrix is to make sure that every customer requirement has been addressed by a technical requirement; otherwise customer requirements will be missed.
The interrelationship matrix component shows the design team’s perception of the relationships that exist between
the customer requirements and technical requirements. The relationships between these requirements are
typically classified as weak, medium, or strong.

Technical Correlation Matrix: The last and the final room in the House of Quality is the technical correlation matrix. In
this room the team identifies technical requirements that either support or impede each other in the product design. In
other words it’s important to understand the relationships between the technical or engineering characteristics so that
designers can identify what trade-offs need to be made. The technical correlation matrix, commonly called the roof of the
House of Quality, is where the team classifies these relationships either as positive or strong positive, if the technical
requirements support or help each other, or they’re classified as either negative or strong negative if the technical
requirements impede or hinder each other.
The technical correlation matrix component identifies where technical requirements support or impede each other in
the product design. In other words, it details the relationships between the engineering characteristics and helps
designers identify what trade-offs need to be made. These technical correlations are typically classified as either
positive or strong positive if technical requirements support or aid each other, or as either negative or strong
negative if technical requirements impede or hinder each other.

Creating HOQ Matrix

Steps involved in the process of creating a House of Quality are –

STEP 1 – Collect the Voice of the Customer information


This step involves several sub-steps. The first step in developing the House of Quality is to collect the Voice of the Customer information. This information is listed in the House of Quality, after which, based on feedback from the customer, the importance is ranked so that the Voice of the Customer can be prioritized. Then a competitive assessment is performed to look at how the customer perceives the competitor’s product versus our product or service.
To illustrate this first step, let us consider a company that manufactures dental surgical supplies. According to the customer research, customers are looking for pliers with several characteristics: the teeth on the pliers must not strip, the handle must not break, the metal must not rust or tarnish, the metal must be shiny, and the handle must be insulated against electrical shock. This information is entered as the Voice of the Customer.

The next step is to determine the customer importance. The customer research also supplies an importance rating for each characteristic, where higher values indicate greater importance. In this example, “the teeth of the pliers must not strip” and “the handle must be insulated against electrical shock” are the two most important characteristics for the customers; therefore, they are both assigned a value of 4.

In the next step of developing the House of Quality, we are still addressing the Voice of the Customer, but now we look at the right-hand side of the House of Quality, at the customer competitive perceptions matrix. It is important that we understand and identify the customer’s perceptions of competing products.

In our example of the manufacturer of dental surgical supplies, the customer ratings for the quality requirements are compared against two successful competitor products that were acquired, Product A and Product B. The ratings run from 1 to 5 and are entered into the customer perceptions component of the matrix, where 1 is good and 5 is bad, so a lower number indicates better performance. It is important to do this before developing the technical requirements, because it is a good time to understand how the product we are developing is perceived relative to our competitors’ products, and where we need to improve our product or service to better meet market expectations.

STEP 2 – Develop a list of Technical Requirements


The next step in the process of developing our House of Quality, is to develop a list of technical requirements. At
this point, we’ve gathered the Voice of the Customer and now the quality function deployment team needs to
develop a list of technical design requirements that are needed to make sure we fulfill each customer need. It’s
important to know that there are two key parts in this step. The first is to identify the technical requirements and
the second is to examine the relationship between the technical requirements and the customer requirements in
the interrelationship matrix.

Let’s look at our example of the dental surgical supplier. The engineering members of the company’s quality function deployment team developed a list of technical requirements to ensure that they fulfilled each customer requirement. These are entered on the vertical axis and include Rockwell hardness, chromium content, surface finish, rubber thickness, and carbon content, among other technical requirements. As we develop the technical requirements, we also need to set the direction of improvement. Once the list of technical requirements is entered into the House of Quality, the team identifies the direction of improvement for each one: an up arrow indicates that a higher value is better for satisfying the customer requirements, a down arrow indicates that the value should be as low as possible, and a 0 indicates that a specific target value is required.

The interrelationship matrix is where the team brainstorms the interrelationships between the customer requirements (the Voice of the Customer) and the technical requirements. This sits in the center of the House of Quality. The interrelationship matrix is populated with symbols that indicate whether each relationship is strong, medium, or weak; it shows the design team’s perception of the relationships that exist between the customer requirements and the technical requirements. This is important because we want to make sure that each Voice of the Customer requirement is addressed by a technical requirement, otherwise we have missed the customer’s expectations. We can use the relationships to make sure there is a strong relationship between each customer requirement and at least one of the technical requirements.

Now let’s take a look back at our manufacturer of dental surgical supplies to discuss how we would complete the interrelationship matrix. We start by determining the relationships in the interrelationship matrix and then work back through the customer requirements. For the requirement that the pliers must not strip, the relationship is strong with Rockwell hardness, vanadium content, and carbon content.

The chromium content has a medium relationship with the stripping requirement. For the requirement that the pliers must not break, carbon content and Rockwell hardness both have a strong relationship, the molybdenum content shows a medium relationship, and sulfur content reduces toughness, so it has a negative relationship. For the requirement that the pliers must not rust, rust resistance is strongly supported by the molybdenum, chromium, and nickel content; the surface finish and sulfur content have a medium effect on corrosion, while carbon content has a negative effect on corrosion protection. The next requirement is that the metal must be shiny: shine is strongly related to the surface finish, with a medium relationship to vanadium content and a weak relationship to chromium content. The final requirement is that the pliers are shock proof: the pliers’ electrical conductivity is strongly related to the rubber thickness of the handles and has a negative relationship to the nickel content.

The next step in the interrelationship matrix is to replace the symbols used for the relationships with actual numbers that represent their relative strength. It is important here that the team decides on the weighting system it is going to use. In this example, we use 0 for negative, 3 for weak, 5 for medium, and 9 for strong. Another commonly used weighting system is 1 for weak, 3 for medium, and 9 for strong. The team must determine which weighting method to use, because different weighting systems can amplify importance and urgency differently. The team then changes the relationships from symbols to numbers. Note that several blank spaces remain; a blank space means there is no relationship between that customer requirement and that technical requirement. Once the team completes the interrelationships, they need to go back and interpret those relationships to recognize certain patterns. For example, a blank column indicates that we might have an unnecessary technical requirement: if a column is blank, that technical requirement is not addressing any of the customer requirements, which means it might be unnecessary.

Blank or weak rows are an indicator that a customer requirement has not been addressed by a technical requirement. If a row has only weak relationships to the technical requirements, that may mean we have not adequately addressed that customer requirement. Identically weighted rows mean that two customer requirements have been addressed in exactly the same way. We should either consider combining the two requirements, or go back and make sure we truly understand what those customer requirements are, because they relate to the same technical requirements; either the two have been confused, or the customer requirements are not clear enough. A strong diagonal pattern, where each customer requirement is directly related to exactly one technical requirement, is not something we would normally expect to see, because the technical requirements are brainstormed ideas rather than one-to-one responses to the customer requirements.

A complete row is an indicator that the customer requirement has a strong relationship to most or all of the technical requirements. This could be a sign that the customer requirement is costly, because it is linked to so many of the technical requirements, and therefore it needs to be tightly controlled; if the customer requirement does not meet the technical specifications, this might also lead to a safety issue. Similarly, a complete or nearly complete column is an indicator that the technical requirement has a strong relationship to all or most of the customer requirements. That technical requirement is important because it is linked to so many of the customer requirements; it needs to be tightly controlled, because if that aspect of the system, product, or process we are designing fails, it will greatly impact most of the customer requirements.
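Once the interrelationship matrix has been converted to numbers, these pattern checks can also be run automatically. The sketch below is a minimal, hypothetical Python illustration; the matrix values, requirement names, and thresholds are assumptions for demonstration, not data from the pliers example.

```python
# Hypothetical sketch: scanning a numeric interrelationship matrix for the
# warning patterns described above. Rows = customer requirements,
# columns = technical requirements; 0 means "no relationship".

customer_reqs = ["requirement A", "requirement B", "requirement C"]
technical_reqs = ["tech req 1", "tech req 2", "tech req 3", "tech req 4"]

matrix = [
    [9, 9, 0, 0],   # requirement A
    [9, 9, 0, 0],   # requirement B (deliberately identical to A)
    [0, 0, 9, 0],   # requirement C
]

# Blank columns: technical requirements that address no customer requirement
for j, tech in enumerate(technical_reqs):
    if all(row[j] == 0 for row in matrix):
        print(f"Possibly unnecessary technical requirement: {tech}")

# Blank or weak rows: customer requirements without any strong (9) relationship
for i, req in enumerate(customer_reqs):
    if max(matrix[i]) < 9:
        print(f"Customer requirement not strongly addressed: {req}")

# Identically weighted rows: requirements addressed exactly the same way
for i in range(len(matrix)):
    for k in range(i + 1, len(matrix)):
        if matrix[i] == matrix[k]:
            print(f"Identical rows: {customer_reqs[i]} and {customer_reqs[k]}")

# Complete or nearly complete columns: tightly control these technical requirements
for j, tech in enumerate(technical_reqs):
    linked = sum(1 for row in matrix if row[j] > 0)
    if linked > 0 and linked >= len(matrix) - 1:
        print(f"Critical technical requirement (tightly control): {tech}")
```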

Determining Technical Priorities


The third step in developing the House of Quality is to determine the technical values of the competitor products. The technical values of competitor products and of previous product versions are used as a benchmark for comparing the values of the technical requirements. For each technical requirement that was identified, the team sets quantifiable measurements for the competitors and any previous product versions. These quantifiable measurements help to highlight the areas of the product or service we are developing where we need to target the right improvements, and they also reveal areas of the design that are already considered best in class. Returning to our manufacturer of dental surgical supplies, the team runs reverse engineering tests on the competitor products against the technical requirements that were developed by the QFD team.

In general, the customer perception part and the competitor analysis come from sales and marketing feedback and competitive intelligence activities, but sometimes it is important to have a deeper and more technical engineering analysis of competitor products and services. This is performed by technical experts, engineers, and research and development teams within the company, in conjunction with the activities of sales and marketing and company management. It helps the team develop quantifiable metrics and estimates, and the information is then entered into the competitive benchmarks component of the House of Quality for each technical requirement and each competitor product.

Now let’s look at how the priority for each technical requirement is calculated. The QFD team first calculates the absolute technical priority of each technical requirement by multiplying each customer requirement’s importance rating by the corresponding value in the interrelationship matrix and summing the products. For the first technical requirement, Rockwell hardness, we begin by multiplying 4 x 9, which equals 36; then we multiply 3 x 9, which gives us 27. The next three requirements, not rust, be shiny, and be shock proof, have no relationship with Rockwell hardness, so those values are multiplied by 0.

Adding up these values gives the absolute technical priority: 36 + 27 + 0 + 0 + 0 = 63. Based on the highest numbers, we are able to prioritize our technical requirements. The next step is to convert the absolute technical priorities into relative rankings, based on how each technical requirement ranks relative to the others. The final step is to determine the technical difficulty: the engineers involved in developing the product assign a rank of technical difficulty based on how difficult it will be to implement each of the technical requirements.
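The absolute technical priority is simply a weighted sum. The Python sketch below reproduces the Rockwell hardness calculation from the worked example; the values not explicitly stated above are placeholder assumptions and are labeled as such in the code.

```python
# Minimal sketch of the absolute technical priority calculation for one
# technical requirement (Rockwell hardness in the worked example).
# importance: customer importance rating for each customer requirement
# relationship: interrelationship weight with this technical requirement
#               (0 = none, 3 = weak, 5 = medium, 9 = strong)

importance = {"not strip": 4, "not break": 3, "not rust": 3, "be shiny": 2, "shock proof": 4}
relationship = {"not strip": 9, "not break": 9, "not rust": 0, "be shiny": 0, "shock proof": 0}
# NOTE: "not strip" = 4 and "shock proof" = 4 come from the example; "not break" = 3
# is inferred from the 3 x 9 step; "not rust" and "be shiny" are placeholder values
# that do not affect the result because their relationship weights are 0.

absolute_priority = sum(importance[req] * relationship[req] for req in importance)
print(absolute_priority)  # 4*9 + 3*9 + 0 + 0 + 0 = 63
```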

The process of project management involves planning, organizing, and motivating the different resources involved in achieving a specific task or working towards a common project goal. Within the purview of Six Sigma there are various types of projects, and each project typically includes a team of six to eight people. Projects may range from work within a specific department to projects that involve multiple departments, and they typically run for four to six months. It therefore becomes crucial to tie project management practices into Six Sigma: a fairly large project that spans different departments and involves multiple team members requires assurance that it stays on track and that all the deliverables and deadlines are met.

  “The project charter is defined as a tool used in the Define phase of the DMAIC methodology to give the team and the organization an idea of what is involved in the project.”
  Some of the key elements of the project charter include – the business case, problem statement, project scope,
goal statement, key deliverables, required resources, and roles and responsibilities. Project Charter helps the
team really establish what the team should be working towards and it gives everyone on the team a common
understanding of the focus of that project.

Business Case: The first aspect of the project charter is the business case, in which the team provides the rationale for the project. This is also where the value that the project has for the organization should be clearly delineated; we are trying to answer the question “Why should we do this project?” An example of a business case statement is “Increasing our average sales per order by 8 to 10% will increase our gross revenues to a level at which we will be the clear market leader in the online auto market sales.” The purpose of the problem statement is to describe the problem that we are trying to solve and the impact that this problem has on our business. An example of a problem statement is “Customer returns of our alkaline batteries at retail stores are well above the industry average of 0.7%, resulting in a direct loss of $450,000 and an indirect loss amounting to $600,000 per year.”

Project scope: The second aspect of the project charter is the project scope. The project scope provides a baseline of what should be included in the project, which helps set the stakeholders’ expectations of exactly what the project is focusing on and where they will see improvements. The project scope also helps answer questions such as “What processes are we addressing, and at what level?” An example of a project scope statement is “The project will focus on help desk operations at the company’s Philippines facility. The project team will review inbound customer support processes at all levels over the month of July to determine ways to reduce customer wait time. Hardware and software infrastructure and staffing schedules will not be examined.”

Goal Statement: The third aspect of the project charter is the goal statement. This element helps the team describe the anticipated results of the project improvement efforts. It is also where the team can outline the quality focus and highlight what improvements the team is trying to make from a quality standpoint. It answers the questions “What results do we anticipate from completing this project?” and “How will we measure the results?” An example of a goal statement is “The goal is to reduce the average cycle time to 7 days by July 30.”

Key Deliverables: Another key aspect of the project charter is the key deliverables. The key deliverables provide the roadmap for the team, together with estimated completion dates, so that the team can assess whether or not it is on track. The roadmap and deliverables are typically aligned with the project phases of Define, Measure, Analyze, Improve, and Control, and any key deliverables within those phases should also be listed with estimated completion dates. This helps the team answer “What are the key activities?” and “When should those key activities take place?”

Team: The next aspect of the project charter covers the team and their roles and responsibilities. It is very important to address the resource roles and responsibilities required within the Lean Six Sigma project. The list of team members should therefore include their roles and responsibilities, so that there is a clear understanding of where each person fits within the team. It is also important to include information on the reporting relationship structure. The objective of including the team members and their roles and responsibilities is to answer questions such as who will be doing the work, which tasks each person is assigned to, and to whom they will report.
Project Problem Statement
The sole objective of the problem statement in the project charter is to describe the problem that the team is trying to solve. The problem statement reflects why the team is using continuous improvement and quality methodologies, and it defines the impact that this problem has on the business. With this information, the team can set meaningful goals for the process improvement efforts. The problem statement also helps the team focus its efforts in the right direction to ensure the team is working on the right things. It answers a few questions for the team: how big is the problem, how is it being measured, and what does it impact? Let’s illustrate the purpose of a problem statement using an example – “Our current cycle time for order delivery is 9.5 days. This falls short of the customer requirement of 6 days. This deficit has resulted in the loss of customers and $2.5 million in lost revenues each year.”

Features of a Project Problem Statement


•The rationale behind developing a problem statement is to provide clear and concise information about the current business situation; the statement should contain quantifiable information about the problem itself and its impact on the customer.
•It is essential that we pay attention as well to what our problem statement should not do.
•Clearly the problem statement should not state an opinion about what’s wrong; it should be based on quantifiable
information.
•The problem statement should not describe the cause of the problem. While going through a Lean Six Sigma project, we want to use root cause analysis to drive down to the true cause of the problem, and we want to avoid leading people in one direction at the start of the project.
•It is essential that the problem statement does not describe broad issues. We want to be as specific as possible so that we
get the team focusing on the same issues together.
•The problem statement should not place any blame or responsibility for the problem. This goes back to Deming’s philosophy that there are no bad people, only bad processes; we need to focus on improving the process, not on placing blame.
•The problem statement must not appear to prescribe a solution. We want the team to work through the Lean Six Sigma methodology to determine the root causes and then use that information to determine the most appropriate solutions.

Now let’s take a look at how we can refine our problem statements to provide more information for the team.

Illustration – Let us say the initial problem statement was that our average on-time delivery rate is falling short of customer expectations. Such a problem statement doesn’t give us much detail about how much we are falling short or what our current delivery rate is. We can take that fairly vague statement and refine it to the following – “We currently have an on-time delivery average of 65% for Product A and 75% for Product B, which falls short of our customer expectations of 100%.” This refined statement gives us much more specific information about which products are affected, where we are falling short of our customer expectations, and by how much.

Elements of a Project Charter

Purpose: Use this job aid to assist us in creating a project charter for the next Six Sigma project.

 
Instructions for use: We can print this document or recreate the table in a word processing or spreadsheet
application and use it to complete the project charter.
Using a template to create a project charter for a Six Sigma project can help ensure that all the necessary
elements are included.
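As a complement to the table-based job aid, the hypothetical Python sketch below shows a project charter template whose fields simply mirror the charter elements listed above; the sample values are adapted from the examples in this section, while the team member names and phase dates are invented for illustration.

```python
# Hypothetical sketch of a project charter template mirroring the elements
# described above: business case, problem statement, scope, goal statement,
# key deliverables, required resources, and roles/responsibilities.
from dataclasses import dataclass

@dataclass
class ProjectCharter:
    business_case: str
    problem_statement: str
    project_scope: str
    goal_statement: str
    key_deliverables: dict            # deliverable -> estimated completion date
    required_resources: list
    roles_and_responsibilities: dict  # team member (invented names) -> role

    def summary(self) -> str:
        """Return a short printable summary of the charter."""
        return (f"Business case: {self.business_case}\n"
                f"Problem: {self.problem_statement}\n"
                f"Scope: {self.project_scope}\n"
                f"Goal: {self.goal_statement}")

# Example usage with values adapted from this section (dates and names invented)
charter = ProjectCharter(
    business_case="Increase average sales per order by 8 to 10%.",
    problem_statement="On-time delivery averages 65% (Product A) vs. a 100% expectation.",
    project_scope="Help desk operations at the Philippines facility; infrastructure excluded.",
    goal_statement="Reduce the average cycle time to 7 days by July 30.",
    key_deliverables={"Define phase complete": "Feb 15", "Measure phase complete": "Mar 30"},
    required_resources=["Black Belt project lead", "process data analyst"],
    roles_and_responsibilities={"A. Sharma": "Champion", "J. Lee": "Team member"},
)
print(charter.summary())
```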

Guidelines for writing a problem statement


When writing a problem statement, following things should be kept in mind,

•Define the problem specifically and measurably


•Use objective language; do not express opinions
•Focus on symptoms only; do not speculate as to possible causes
•Describe just one key problem; do not combine several problems in the problem statement
•Do not assign blame or responsibility for the problem
•Do not prescribe a solution

Process of scoping in Six Sigma Project


We can define project scope within Six Sigma as ‘the work that needs to be performed to fulfill the project’s
objectives’.  This is where we need to understand what is included in the project and what is not included in the
project. The purpose of project scoping is to let the team know exactly what is required and what is not required.
It also helps to tell the team what areas and what levels in the organization the project expands to. Project
scoping acts as a boundary around the project so we know specifically what processes are included, what
aspects of the project are included, and what are not. Also the project scope helps the team to focus on where
they should prioritize their efforts. We define the term ‘Scope creep’ as the event when we start moving outside of
the processes that we’ve targeted. To illustrate it further let’s consider a Six Sigma project where we specify that
our scope of the project includes a certain set of steps. But what happens sometimes within Six Sigma is that we
start finding other areas that would be nice to work on, or things start expanding because we don’t really
understand what our project focus is. Such a situation is defined as scope creep.
Now when we start adding additional deliverables, or requirements, or additional process steps, we need to be far
more careful. This lengthens the time it’s going to take for our project to be completed and it also adds in extra
steps that may not be related. So this adds an extra ambiguity to our process improvement efforts.

Two main causes of scope creep are –


•The first cause of scope creep is that the team tackles a bigger issue than it actually planned for, perhaps because it did not have an appropriate problem statement. As the team gets into the project itself, it might uncover bigger issues; in that case, the team needs to take a step back and see if the project should be split into two separate projects.
•The second cause of scope creep is when the project Champion expects too much. The Champion may not have an understanding of what is really involved in the project and may expect more deliverables. In this case, the project Champion and the project team need to go back to the project charter and review the goals and the objectives to come to a common agreement.

With reference to a Lean Six Sigma project, it is essential to have the appropriate scope: neither too narrow nor too broad. When the scope is too narrow, we are typically looking at a subset of the process instead of the wider process. In that situation, we will not achieve the desired improvement goals, because we are only improving a small section of the process; and because we are not looking at the full breadth of the process, we may miss the root cause if it lies outside the scope. On the other hand, if the scope is too broad, we might also miss the root cause, because we are trying to take in too much information, which makes it more difficult to understand what the true root cause is. This also impacts project completion: trying to improve too much at one time takes longer, and a process that is scoped too broadly can engulf and frustrate the project team, since it becomes far more difficult to find root causes and to understand every step of the process entirely.

Four key management tools helpful in defining a project scope are –


Process Maps – The process map helps us walk step by step through the process under consideration. This clearly helps the team figure out the appropriate boundaries for the project.

Pareto Charts: The second management tool is the Pareto chart. Pareto charts are helpful because they allow us to focus on the most important areas within the process or project under consideration.

SIPOC Diagram: The SIPOC diagram, also referred to as the suppliers, inputs, process, outputs, and customers diagram, is a useful tool that documents the process from the supplier all the way to the customer; we can use this information to scope out the project.

Voice of the Customer – The fourth and last key management tool considered beneficial in project scoping is the Voice of the Customer, as it focuses on understanding exactly what the customer is looking for in terms of process improvement; this information is then used to scope the aspect of the process that impacts those customer requirements.
Defining Project Scope
In order to have a well-defined project scope, the following features must be complied with –

•The project scope should address the source of the problem and it should focus on only one problem. In case we are
trying to focus on more than one problem, then they should be separate projects in themselves.
•The project must have achievable objectives. It needs to be realistic and not too broad.
•Ensure we are setting the team up for success, with goals that are aggressive yet achievable.
•Project should also be well scoped in such a way that it’s budgeted properly in terms of time, money, resources, and
capability.
•Make sure we are providing necessary resources so that the project can be accomplished and the team can be successful.
•A well-scoped project has consistent input from stakeholders.
•We need to make sure that we have frequent communication and feedback from our stakeholders along the way to make
sure all the expectations and needs of our customers and our stakeholders are met.

While developing and defining the project scope, it is crucial that we think about the customer and try to
understand their mind set. We need to understand what they see as value from our products and services. The
key question with defining our scope is what will create the most value for our customers and our stakeholders?
This is where we can focus in on the area and determine the appropriate boundaries to scope our project.

Some of the best scoping practices are –

•The first practice is to look at other projects to determine whether our project might impact them. This ensures that there is no overlap or waste, because other projects might already be addressing items that we have included in our project scope.
•The second best practice is to set clear objectives. This can be done by communicating frequently with the stakeholders
and everyone that is involved in the team about the expectations, deliverables, and time lines.
•Another practice is to focus on the financial aspects. The project goal should be achievable within the allocated financial resources. If it looks like the project is going to run over budget, then we may have scoped it too broadly. It is extremely important to try to prevent the project from crossing its boundaries.
•While planning for a project we may face additional issues that could be addressed in associated projects. Therefore it is
a good idea to establish procedures to track the different child projects to prevent the current project from overlapping
with others.
•In addition, we need to be time conscious. The project should have start and end points, and the project goal should be
achievable within the available time. A project that overruns could be a sign of incorrect scoping of the project.
•Finally, it is essential to write a precise and concise scope statement. The scope statement should set out specifically what the project is expected to work on and, more importantly, what it should not be working on. This will help avoid some of the scope creep.

Illustration
We now illustrate how these best practices can be applied to scope a project, using a loan application process.

During a meeting with the Champions of other project teams, the team discovers that a separate project is
already being performed that addresses loan data transfer issues. As a result, the team decides that they should
remove this from their project scope. Also, the project team leader has set out a very specific project definition
that includes deliverables, time lines, and clear expectations for the loan application project.

Two potential side projects were also identified by the team, in the scheduling and data input areas. These will be dealt with by other project teams, which helps the teams ensure that they do not cross project boundaries. The team is also conscious of the time and financial constraints they have to work within, and they are scoping their project accordingly; March 30 has been set as the deadline for project completion within the budget. The team’s scope statement specifically details the processes covered by the project: the entire loan process, which begins with a call from the customer and ends with the acceptance or rejection letter being sent to the customer.
Using a Process Map
A process map can be defined as a way to help organizations understand the flow of information, products, or processes. A process map is useful because it presents information in graphical form so that everyone can understand each step in a process and see how the inputs, the outputs, and the tasks are all linked together.

Within the scope of Six Sigma, process maps are useful for scoping our processes. When a project manager needs to explain a project scope in a way that a wide variety of people can understand, the process map helps because it displays the steps of the process, shows the various inputs and outputs and how everything is linked together, and has a clear start and stop.

Purpose of using a process map


 
•Identify any disconnected steps we might have in a process because we’re showing the linkages between each step in the
process.
•The process map can be used to identify and clarify any responsibilities and relationships by going through each step of
the process to see who the owner is and how it relates to other steps in the process.
•The process map is also helpful in identifying non-value-added activities. In which case we can go through each step of
the process and determine if that step adds value to the customer or if it does not.
•Process maps are also used to isolate process bottlenecks, so we can understand where there is an interruption of flow
and where we’re building up queue or work in process within our processes. This assists in discovering opportunities for
improvement by identifying waste in the process, or bottlenecks, or disconnections. Then based on our opportunities for
improvement, we can determine what those appropriate corrective actions should be to maintain and enhance our system
quality.

Components and Symbols of a Process Map


When drawing a process map, several generally accepted shapes are used to denote certain aspects of the process. Let's take a look at a simplified, partial version of a product return process.

•Begin the process map with an oval; in this example, this is the step where the customer returns the product.
•We end the map with an oval to mark the final step in the process, showing a simplified version of the process.
•A circle is used for any activity that requires cooperation and also for inspection points. In this step of the process, a team
stops to assess the reason for rejection.
•Diamonds are used to show decision points in the process. In this point of the process, a team decides whether to accept
or reject the customer’s reason for returning the product.
•Squares or rectangles are used to represent a particular step or activity in the process. In this case, depending on the
outcome of our decision, whether or not we accept or reject it, the team will either return the product to the customer or
rework the product.
•Another dimension added to process maps is the concept of swim lanes. These are horizontal bands added to the process map to show who has responsibility for each step. They are very beneficial in Six Sigma for defining the scope of a project, since they show the start and end points and also where we cross different department or functional boundaries.

Using a Pareto Chart for Project Scoping


One of the most useful tools for scoping the projects is the Pareto chart. The Pareto chart is a special type of bar
chart where the values are plotted in descending order. The Pareto chart is based on the Pareto principle, or the 80/20 concept – that is, 80% of the effects come from 20% of the causes. For example, we could say that
80% of the value of a company’s inventory comes from 20% of its stock or 80% of the failures in a system come
from 20% of the defects.

The Pareto principle is one of the most reliable ways to identify the sources within a process that create a disproportionate number of defects. Within Six Sigma and process improvement, these are known as the vital few sources in the area we are trying to improve. The Pareto chart is also useful for analyzing non-numerical data such as
the cause of a problem. With Six Sigma, we can use the Pareto chart and the 80/20 principle to determine what
those vital few aspects or causes are and then scope our project so that we’re focusing on those key areas.

Usefulness of Pareto Chart


•Pareto chart is especially useful when we are trying to identify factors as a team that could be the most important for the
project. These are those factors that are the vital few areas that have the greatest cumulative effect in the system.
•Pareto Chart helps the team discern value, or in other words, what’s important and what’s not important.
•Pareto charts are useful for categorizing the problems. We can look at the problem from a business perspective, a
product, or a service, and categorize the problem that way for example.
•Pareto charts are useful when we are trying to prioritize problems to determine which ones have the greatest impact. Then, based on that information, we can prioritize and determine which need to be addressed first.
•Pareto charts can also be used to assess system changes by conducting a before and after comparison. They can also be
used when we’re analyzing data about the frequency of a problem.
•Pareto charts are useful when we’re examining particular items within a broad analysis.

Steps in creating Pareto Charts


•First step is to list all of the causes of the defects in descending order. We do this by categorizing the problems and
counting the frequencies of each of those defects in that category.
•In the second step, we calculate the cumulative percentage of defects. For this we start by determining the sum of all of
the defects from all of the different causes.

Let’s take an example we have three different causes that create the defect.

•First cause has 50 defects


•Second cause has 30
•Third accounts for 10 of the defects

Here, the total sum of the defects is 90. The second part of step 2 is to find the percentage of defects that each cause contributes to the total number of defects.

Percentage of Defects = (Number of defects for the cause / Total number of defects) x 100


So we get,

•Percentage of Defects, First cause = 50/90 ≈ 56%
•Percentage of Defects, Second cause = 30/90 ≈ 33%
•Percentage of Defects, Third cause = 10/90 ≈ 11%

The third and final part of step 2 involves accumulating the contributions of each cause, one by one. We start with 0% of the defects and then add in the contribution of each cause of the defect, starting with the most frequent cause.

•Since Cause 1 had 50 errors, the cumulative percentage at this point is 56%.
•Cause 2 contributed 33% of the defects, so we add 56 + 33 and the cumulative percentage is 89%.
•Then we add in Cause 3, which was 11%.

This brings the cumulative percentage to 100%. That last cause of the defect should always bring the cumulative
percentage to 100%.
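As a minimal sketch of the arithmetic above, the following Python snippet (using the three hypothetical causes with 50, 30, and 10 defects from the worked example) computes each cause's percentage of defects and the running cumulative percentage:

```python
# Sketch: percentage and cumulative percentage of defects per cause,
# using the worked example above (50, 30 and 10 defects, 90 in total).
defect_counts = {"Cause 1": 50, "Cause 2": 30, "Cause 3": 10}

total = sum(defect_counts.values())
cumulative = 0.0
# Work through the causes in descending order of defect count.
for cause, count in sorted(defect_counts.items(), key=lambda kv: kv[1], reverse=True):
    share = 100 * count / total          # this cause's contribution
    cumulative += share                  # running (cumulative) percentage
    print(f"{cause}: {share:.0f}% of defects, cumulative {cumulative:.0f}%")

# Rounded output: 56% / 56%, 33% / 89%, 11% / 100%
```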

Now the final step in the process of building the actual Pareto chart involves adding the axes and the numerical
scale. So if we go through and look at our scale, the first axis includes the number of defects. It is important to
make sure that our axis here will cover up to the highest number of defects we have for each category. Then in
the far right axis, we have the percentage, and this ties into the cumulative percentage that we calculated
previously. Then we would add our bars and our categories. Our categories are those defect categories and our
bars will indicate how many defects we have for each category. The next step in creating the Pareto chart is to
plot the cumulative frequency of defects. To do this, we start by drawing a scale on the right-hand side of the
chart, and that’s going to be used to measure the cumulative percentage. Since it is a cumulative percentage, it
should always run from 0 to 100%.

  The next step is to plot the data points to represent that cumulative percentage of each category. To do this, we
need to make sure that we are aligning the data points with the right edge of the corresponding bars. Then finally
we connect the points with the line and this is known as a cumulative percentage line. The last step in creating a
Pareto chart is to prioritize the vital few.
Two main methods within Pareto analysis are,


Fixed-percent method: Under this method we apply the 80% rule, or the 80/20 rule. It is based on the philosophy that the causes that combine to produce 80% of the defects are the vital few. This is typically the most accurate methodology for determining what those vital few are.

Change-in-slope method: This method uses the cumulative frequency line and is based on the philosophy that all causes to the left of the break in the slope of the line are the vital few. This is the more visual of the two methods. Let's take a look at two examples. With the fixed-percent method, we would find the point at 78.6, roughly 80%, and select the top two causes. With the change-in-slope method, we would also select the top two.

Primary and secondary metrics


It is essential to understand our project metrics as we work on the project charter, develop our problem statement, and move into our goals and objectives, which in turn tell us what we need to measure. The project metrics should be quantitative and important to the customer, so these metrics should relate back to the Voice of the Customer that has been gathered. It is also important to understand how we are going to measure the project metrics. We could use tools such as the process flow diagram to determine where we should measure them.

Project metrics are important because they are used to measure the progress of continuous improvement efforts and compare it against the final output to ensure that the target meets expectations. Project metrics also help in identifying problems within the processes and shortfalls in meeting our goals. This information is then used to drive the root cause analysis.

Illustration
We consider a project focused on reducing customer order to delivery cycle time. We shall now consider how we
can develop the appropriate corresponding metrics.

Given Project requirements – 


•The first requirement is to trim down the order-to-delivery time, so the metric here could be cycle time, since it is a measure of time.
•The second requirement is to have correct customer addresses. The project metric here could be customer address errors.
•The third requirement is to streamline the process, in which case the project metric could be lead time or value-added time.

It is very important to note that, while developing the metrics, each of these three metrics should be quantitative and should relate directly back to the specific requirement. There should be a clear linkage between the requirement and the project metric.

Types of Metrics

Primary Metrics: Primary metrics are the metrics that can be directly observed, influenced, or changed. In this case there is something we are actually getting a reading from, and we have quantitative data based on what we are changing or observing. A primary metric is directly connected to the project goal; there is a clear linkage between what is being measured and the project goal. These are tangible and objective results that help provide proof that there has been a change in the system, the project, or the process.

Secondary Metrics: Secondary metrics influence the primary metrics but cannot be measured directly. Secondary metrics are used to track trends or other intangible aspects of a process or change; since there is no direct observation to measure this type of metric, we have to use circumstantial evidence.
Let us illustrate this further by looking at some primary metrics and secondary metrics. For instance if we are
trying to reduce the cycle time that it takes to order a product through our online retail store, the primary metric
would be the order cycle time. Now, if we aim to increase the traffic through our online store, we could use a
primary metric such as the number of web site visits. On the other hand a secondary metric with reference to an
online retail store would refer to the number of backorders. It is not directly related to our cycle time, but it would
give us information that’s related to cycle time. With an objective to increase the number of customers visiting
the online store, a secondary metric could be the customer satisfaction ratings since this would be an indicator of
customers that are satisfied with their delivery time. Clearly it is not a direct measure, and we would want to
focus more on those primary metrics as they are more related to the project goals.

Illustration
We now consider a few other examples of primary and related secondary metrics. Primary metrics could include returns or scrap, number of complaints, defects, and delivery time.

•If we are looking at returns or scrap and trying to reduce our external defects, a secondary metric that we could use is production costs.
•If we are looking at a primary metric of customer complaints, we are trying to improve customer satisfaction and reduce the number of complaints. A secondary metric could therefore be our customer satisfaction scores.
•If we are looking at defects, we are trying to reduce the number of defects. Secondary metrics could then include cost, customer satisfaction scores, or revenue.
•If we are trying to improve our delivery time, the corresponding secondary metrics could be cost, waste, or process steps.

There is a third type of metric, referred to as 'consequential metrics', which represents negative side effects that might result from making the planned improvements.

For instance, the primary metric might be input cost. A high input cost could be addressed by choosing a cheaper vendor, but then there might be a risk of reliability and quality issues. This leads to consequential metrics such as increased inventory, increased scrap, or increased defects in the final products. It is crucial to pay close attention to these consequential metrics to ensure that we are realizing the overall gains from the improvement project. Therefore, while developing the metrics for a project, it is important to look at the requirements and then go back to the Voice of the Customer. There should be a clear linkage between what we are measuring and what we are trying to improve for the customer. For instance, if we are trying to create a door, several questions should be asked before taking up the assignment, such as: what must fit through the door, what clearance is needed, is it meant to allow or deny easy access, who will be using the door, what are the size and ability of that person, and does it require a lock or a window? These questions should be asked and answered based on the Voice of the Customer and their key requirements.

Planning Tools
Project management and the process of project planning are very important in a Six Sigma project since they help in planning the project and monitoring its progress. Project planning tools are important especially in the early project planning phase since they help in identifying the activities necessary to accomplish the final goal.
Project Planning tools also help in scheduling the necessary activities appropriately to ensure that the targets are
met and thereby monitor the progress according to these outputs. Project Planning tools are also used during the
project to monitor the progress of the process improvement project and also detect the difference between the
actual and the planned values, so as to analyze the areas we are running behind or running ahead of the projected
target schedule. A Gantt chart is one of the tools used throughout the project to help with planning and tracking
the progress and also controlling the overall performance. A Gantt chart helps in listing each individual task and
then provides duration for each of those tasks. It clearly lists the information based on how long it will take and it
also shows the relationships or dependencies where one task must be completed before another task. This
information can then be used to monitor progress based on today's date, to analyze whether the project is on track or behind schedule.

  There are two additional planning tools used to help with planning and monitoring projects.

Program Evaluation and Review Technique (PERT): PERT methodology analyzes and represents all of the tasks that
are involved in a project and the time needed for each task. The primary goal is to find the minimum overall time.

Critical Path Method (CPM): The CPM methodology looks at all of the activities that are required in the project and the
time required for each task and any dependencies within each step of the process. The CPM chart looks for the longest
path which is used to determine what is critical. The critical path is the sequence of activities with the longest duration.
Planning tools are considered very useful for reporting project status and performance on the project to
stakeholders. The key aspect of a planning tool is its graphical nature. It really helps to simplify the reporting and
it gives the project status at a glance. So someone involved with the project can clearly see the current status. In
addition, if there are any issues where the project team is running ahead or behind, these variances are clearly marked on the project planning tools. One of the most useful aspects is that they can easily be e-mailed,
uploaded, or shared. There are quite a few project management software tools available that help make reporting
within these tools very simple. Consequently, we can easily create and share reports and that helps considerably
with improving the communication within the team.

Gantt Charts
One of the most commonly used project management tools for tracking and monitoring progress in a project is the Gantt chart, a planning tool developed by Henry Gantt in 1910. A Gantt chart is a time-scaled activity list which includes the different activities required in a project and the time span each step in the process takes. A Gantt chart is easy to maintain, and one key aspect is that it provides a clear picture of the current status, since we can see what the different activities are and when they should be occurring. Whether we are ahead of or behind schedule becomes easy to track and rectify. This helps the team develop "what if" scenarios – what if we're able to get this done ahead of time, or what if it takes longer than we expected? That way the team can see the impact on the entire project. Usually Gantt charts are created in project management software and the information is presented in a table-like format.

 
Steps to create a Gantt Chart
•Define the project settings, such as its start date, end date and scheduling mode. The most common scheduling mode is
forwards from the project start date. In this mode the default is for tasks to start as soon as possible, which means that the
whole project finishes at the earliest possible date.
•Define the project calendar. This sets the number of working days in the week, the number of working hours in the day,
and so on.
•Enter or edit task names and durations.
•Set up a global resources list and assign resources to tasks. Although we can often define the resources as we need them,
it is usually quicker to start by setting up a global resources list from which we can then select resources to assign to the
various project tasks.
•Create links to specify the dependencies between the project tasks.
•Set constraints on the tasks as necessary.
•Make final adjustments to the project plan.
•Once the project has actually started, inspect it at regular intervals to detect potential problems or scheduling conflicts
and make any corrections required.
This is an example of a basic Gantt chart. It shows tasks in a Security and Access Control project. Tasks are
outlined in two sections.  Each task uses a yellow triangle to indicate the start date of the task and a green down
triangle to indicate the finish date of the task. Also shown on this schedule are the responsible sub-contractors
for the project (column labeled R-E-S-P).
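For teams without dedicated project management software, a very basic Gantt-style chart can also be sketched with matplotlib, for example as below. The task names, start days, and durations are hypothetical and are not taken from the Security and Access Control example.

```python
# Minimal Gantt-style chart sketch; each task becomes one horizontal bar.
import matplotlib.pyplot as plt

tasks = [                      # (task name, start day, duration in days) - hypothetical
    ("Define scope", 0, 3),
    ("Collect data", 3, 5),
    ("Analyze data", 8, 4),
    ("Implement fix", 12, 6),
]

fig, ax = plt.subplots()
for i, (name, start, duration) in enumerate(tasks):
    ax.broken_barh([(start, duration)], (i - 0.4, 0.8))

ax.set_yticks(range(len(tasks)))
ax.set_yticklabels([name for name, _, _ in tasks])
ax.invert_yaxis()              # first task at the top, as in most Gantt charts
ax.set_xlabel("Project day")
plt.show()
```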

Critical Path Method


In project management, the critical path is defined as the longest task sequence from start to finish. Six Sigma project teams are required to understand the critical path, since these are the tasks that must be completed on time in order for the entire project to be completed on time. Note that the critical path has no flexibility. As a result, we must make sure that the project always stays on time, because any increase in the duration of a critical task means that the project will be pushed out past the final completion date.

Critical path analysis also helps to identify those activities that are critical to maintaining the schedule and the
interrelationships and problem areas between the different tasks.

Key steps in determining the critical path


 
•The first step is to break down the project into specific activities that are required to complete the project.
•Once the team understands what activities are involved, these activities are arranged into a logical sequence. This allows the team to see the relationships between the different activities, any dependencies between them, and any activities that could be done concurrently.
•In the third step, for each activity, the team should estimate the duration – total time it’s going to take to accomplish that
activity.
•In the final step the team plots each path from start to finish to determine the critical path. In an activity network diagram, each event or activity is represented by a symbol – this could be a square or a circle – and each activity is also given a letter or number as a designation. Each arrow connects two symbols and indicates the order or precedence of the activities that should be sequential.

Illustration

The duration of each activity is listed above each node in the diagram. For each path, add the duration of each
node to determine its total duration. The critical path is the one with the longest duration.

Once we’ve identified the critical path for the project, we can determine the float for each activity. Float is the
amount of time an activity can slip before it causes the project to be delayed. Float is sometimes referred to as
slack.
Figuring out the float using the Critical Path Method is fairly easy. We will start with the activities on the critical
path. Each of those activities has a float of zero. If any of those activities slips, the project will be delayed.

Then we take the next longest path and subtract its duration from the duration of the critical path. That is the float for each of the activities on that path. We continue doing the same for each subsequent longest path until each activity's float has been determined. If an activity is on two paths, its float will be based on the longer path it belongs to.

Using the critical path diagram from the previous section, Activities 2, 3, and 4 are on the critical path so they
have a float of zero.

The next longest path is Activities 1, 3, and 4. Since Activities 3 and 4 are also on the critical path, their float will
remain as zero. For any remaining activities, in this case Activity 1, the float will be the duration of the critical path
minus the duration of this path. 14 – 12 = 2. So Activity 1 has a float of 2. The next longest path is Activities 2 and
5. Activity 2 is on the critical path so it will have a float of zero. Activity 5 has a float of 14 – 9, which is 5. So as
long as Activity 5 doesn’t slip more than 5 days, it won’t cause a delay to the project.
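The path-based float calculation just described can be sketched in a few lines of Python. The path durations (14, 12, and 9 days) and activity memberships follow the example above; the labels A1 through A5 are shorthand for Activities 1 through 5.

```python
# Sketch of the path-based float rule: an activity's float is the critical path
# duration minus the duration of the longest path that activity belongs to.
paths = [                               # (activities on the path, path duration in days)
    (["A2", "A3", "A4"], 14),           # critical path
    (["A1", "A3", "A4"], 12),
    (["A2", "A5"], 9),
]

critical_duration = max(duration for _, duration in paths)
float_by_activity = {}

# Work from the longest path down; record each activity only the first time
# it is seen, so it keeps the float of the longest path it belongs to.
for activities, duration in sorted(paths, key=lambda p: p[1], reverse=True):
    path_float = critical_duration - duration
    for activity in activities:
        float_by_activity.setdefault(activity, path_float)

print(float_by_activity)    # {'A2': 0, 'A3': 0, 'A4': 0, 'A1': 2, 'A5': 5}
```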

Early Start & Early Finish Calculation


The Critical Path Method includes a technique called the Forward Pass which is used to determine the earliest
date an activity can start and the earliest date it can finish. These dates are valid as long as all prior activities in
that path started on their earliest start date and didn’t slip.

Starting with the critical path, the Early Start (ES) of the first activity is one. The Early Finish (EF) of an activity is
its ES plus its duration minus one. Using our earlier example, Activity 2 is the first activity on the critical path: ES =
1, EF = 1 + 5 – 1 = 5.

We then move to the next activity in the path, in this case Activity 3. Its ES is the previous activity’s EF + 1. Activity
3 ES = 5 + 1 = 6. Its EF is calculated the same as before: EF = 6 + 7 – 1 = 12.

If an activity has more than one predecessor, to calculate its ES we will use the activity with the latest EF.

Late Start & Late Finish Calculation


The Backward Pass is a Critical Path Method technique we can use to determine the latest date an activity can
start and the latest date it can finish before it delays the project.

We’ll start once again with the critical path, but this time we will begin from the last activity in the path. The Late
Finish (LF) for the last activity in every path is the same as the last activity’s EF in the critical path. The Late Start
(LS) is the LF – duration + 1.

In our example, Activity 4 is the last activity on the critical path. Its LF is the same as its EF, which is 14. To
calculate the LS, subtract its duration from its LF and add one. LS = 14 – 2 + 1 = 13.

We then move on to the next activity in the path. Its LF is determined by subtracting one from the previous
activity’s LS. In our example, the next Activity in the critical path is Activity 3. Its LF is equal to Activity 4 LS – 1.
Activity 3 LF = 13 -1 = 12. Its LS is calculated the same as before by subtracting its duration from the LF and
adding one. Activity 3 LS = 12 – 7 + 1 = 6.

We will continue in this manner moving along each path filling in LF and LS for activities that don’t have it already
filled in.
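A small sketch of both passes for the example network is shown below. The dependencies follow the example (Activities 1 and 2 feed Activity 3, Activity 3 feeds Activity 4, and Activity 2 feeds Activity 5); the durations of Activities 1 and 5 (3 and 4 days) are assumptions inferred from the path totals of 12 and 9 days quoted earlier.

```python
# Sketch of the Forward Pass (ES/EF) and Backward Pass (LS/LF) calculations.
durations = {"A1": 3, "A2": 5, "A3": 7, "A4": 2, "A5": 4}   # A1 and A5 assumed
predecessors = {"A1": [], "A2": [], "A3": ["A1", "A2"], "A4": ["A3"], "A5": ["A2"]}
order = ["A1", "A2", "A3", "A4", "A5"]                      # a topological order

# Forward pass: ES = latest predecessor EF + 1 (or 1); EF = ES + duration - 1.
es, ef = {}, {}
for a in order:
    es[a] = max((ef[p] for p in predecessors[a]), default=0) + 1
    ef[a] = es[a] + durations[a] - 1

project_end = max(ef.values())                              # 14 = critical path duration

# Backward pass: LF = earliest successor LS - 1 (or project end); LS = LF - duration + 1.
successors = {a: [b for b in order if a in predecessors[b]] for a in order}
ls, lf = {}, {}
for a in reversed(order):
    lf[a] = min((ls[s] for s in successors[a]), default=project_end + 1) - 1
    ls[a] = lf[a] - durations[a] + 1

for a in order:
    print(a, "ES", es[a], "EF", ef[a], "LS", ls[a], "LF", lf[a], "float", ls[a] - es[a])
# Floats come out as 2, 0, 0, 0, 5 - matching the earlier example.
```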

Program Evaluation and Review Technique (PERT)


Program Evaluation and Review Technique (PERT) is a project planning tool that uses charts to sequence the different activities. Just like the CPM diagram, the PERT diagram shows tasks, durations, and dependencies. The activities are time-scaled to indicate approximately how long each activity lasts. PERT and CPM both follow the same steps and use network diagrams to plan and schedule the individual activities necessary for a project. Both PERT and CPM are useful in determining the earliest and latest start and finish times for each activity, but there are a few differences between them. The CPM chart is more deterministic, as the estimates of activity duration are based on historical data or information, whereas the PERT chart is more probabilistic, using estimates based on uncertainty. It therefore typically uses ranges to represent the probability that an activity will fall into that range. On a PERT chart each activity node provides information on the duration of that activity.

•The top row, the number on the left-hand side is the earliest point at which the activity could start.
•The middle number is the duration, and this could be in days, hours, minutes, or seconds, or really whatever is relevant to
the project.
•On the right-hand side, this is the earliest point in which the activity could finish.
•Additional information is included in the bottom row of the activity node. The number on the bottom row on the left-
hand side is the latest start point.
•The number in the middle is any available slack time for that activity. Finally, on the bottom row on the right-hand side,
this is the latest possible finish point.

Then the activity nodes are joined together by arrows, and these arrows indicate any dependencies between
these specific activities. When developing the PERT chart, a key aspect of it is the time estimates. The team
would develop this information to estimate the duration values. Typically PERT charts are used on projects that are nonrecurring or infrequent, so we may not have sufficient historical data to determine exact values.
Therefore, the PERT chart uses probabilistic estimates; we’re trying to find the probability that the duration will fall
within a specific range.

Therefore, three time estimates are used in the PERT chart, which are combined into an expected time.


Pessimistic Time: First is the pessimistic time. This is the maximum possible time that would be required to accomplish a
task, assuming that everything goes wrong.

Probable Time: The most probable time is the best estimate of time required to accomplish a task assuming that
everything proceeds as we would expect it to.

Optimistic Time: The optimistic time is the minimum time that would be required to accomplish a task assuming that
everything goes better than we would normally expect it to.

Expected Time: This is the best estimate of the time that will be required to accomplish a task, accounting for the fact that things do not always go as expected and that sometimes they go a little better. The expected duration in a PERT chart is calculated by adding the optimistic duration estimate to four times the most likely duration (it is weighted by four because this outcome is considered four times more likely than the others), then adding the pessimistic duration estimate, and finally dividing by six to get the weighted average of the time estimates.

Illustration
Consider the process of calculating the expected duration to complete the Design phase of a project. If there are no delays, the phase will take no more than six weeks (the optimistic estimate). Based on past projects, the most probable estimate is ten weeks. If things really go wrong, given how the tasks are linked to each other, the pessimistic estimate is 18 weeks. We then divide by six to get the weighted average.

Solution: Expected Time = [Optimistic Time + (4 x Probable Time) + Pessimistic Time] / 6


= [6 + (4 x 10) + 18] / 6

= 64 / 6

= 10.67 weeks
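The same weighted-average calculation can be expressed as a small helper function, shown here as a minimal sketch:

```python
# Sketch of the PERT expected-time (weighted average) calculation.
def pert_expected_time(optimistic, most_likely, pessimistic):
    """The most likely estimate is weighted four times as heavily."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

print(round(pert_expected_time(6, 10, 18), 2))   # 10.67 weeks, as in the example
```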

Project Documentation
Within Six Sigma it is important to have strong documentation, as we move forward within a project we need to
understand all the changes that have occurred within the process so that anyone can replicate the same changes
in a different department. It is very helpful so that we can track specifically what those changes are and identify
the impact. Base documents help us understand what is going on with the process. The ongoing project life cycle documents track what specifically has been done within the project. This is where we keep the project-related information.

A well documented file provides numerous benefits within a Six Sigma project

•It helps to provide that common understanding of the goals and the objectives for the project.
•It can also help to manage the process because we can see what’s been done and what still needs to be done.
•Documentation could also be used to trace the system path of the defects.
•It helps to demonstrate due diligence, because we’re showing what we’ve tried and what worked and what didn’t work.
•It shows all of the different possible paths of what the team has tried to implement for process improvement. This also
includes the lessons learned for future projects.
•It helps other teams look at the process documentation for this project to determine what they could also implement
within their areas and what should be done differently to help improve the next project.

When we begin a Six Sigma project, the project manager needs to determine the amount of documentation appropriate to the project needs. Too much documentation can be a burden from a bureaucratic standpoint, simply keeping track of it all. Excessive documentation can also get us buried in information while trying to figure out what we need, which leads team members to waste their time and effort on non-essential things. On the other hand, too little information could lead to scope creep, because we may have to go out and look for information that we need. It can also lead to miscommunication, since there is not a clear set of information in one location, and to performance gaps, because we do not have clear documentation on what level we are at and where we need to be. Insufficient documentation can also lead to duplication of effort and can ultimately cause system failure, since we do not have all of the information needed.

Therefore it is very essential to maintain a balance between the correct amounts of documentation for any given
project.

List of questions that will help in determining if the document is required or not

•Does the documentation have a distinct purpose?


•Does it further the goals and objectives of the project or the organization as a whole?
•Is the documentation valuable?
•Does it support other documentation?

Four main types of document categories are –


Status Reports: This report considers the health of the project and any plan versus performance criteria. In which case
we are trying to identify potential problems, risks, or threats.

Management Review: The next type of document category is management reviews. These typically look at higher-level, system-level performance so as to understand what is going on within the organization and get that perspective on different improvement opportunities. Management reviews are generally conducted by higher-level senior management.

Budget Review: The Budget review considers resource utilization and cost performance, and documenting any deviation
or risk based on that.

Customer Audits: The last document category is customer audits. Customer audits help to ensure that we are conforming to the agreed-upon requirements and meeting our contractual obligations, so we can limit our exposure to any liability.

Stakeholders desire timely information, and there are several key types of documents that they need. Some of the documents most commonly requested by stakeholders are –
•They require progress reports and status updates as these could be with respect to schedule and budget.
•They also need a current inventory list, cost variances, earned value reports and change requests. This helps in ensuring
that the stakeholders have the appropriate information that they need on a timely basis, and then they could provide
appropriate and timely feedback to the team.

It is suggested to document the necessary steps throughout a process improvement project and communicate
what has been happening within the projects. This can be done using a storyboard. A storyboard is typically
something that is presented either in a software program such as Microsoft PowerPoint or created as an actual
physical display board. It presents different types of information throughout each phase of the project to further
communicate what’s been going on with the project, not only to the team, but other people that might be involved
with that aspect of the process.

Also we could use the Define, Measure, Analyze, Improve, and Control phases and their key aspects to show the
story. This means,


Define Phase: When we look at the Define phase, we could show the problem statement, goals and the business case, the
project schedule, and a SIPOC diagram.

Measure Phase: Within the Measure phase, we might want to start showing more of the information that’s specific to the
process, such as maps and Pareto diagrams. We could also include any data collection and then any graphical presentation
of that data, and potentially fishbone diagrams.

Analyze Phase: Within the Analyze phase, we start to understand what those key factors are in our process. So here we
could show information such as correlation charts or any of our theory testing that we’re doing.

Improve Phase: Within the Improve phase, we start to make improvements based on those key variables. We could show
information such as experimental designs or the pay-off matrix or how we’re implementing our solutions and what those
roadmaps look like.

Control Phase: Once all improvements have been made we could show things such as information tracking, our project
results, and our performance and any lessons learned that we might have.
This provides good information to the team and anybody else that the project team has been working with on
accomplishing these improvements. It’s important to note that other software applications can also be used to
help with documentation and reporting.

Risk Management in Six Sigma


Throughout Six Sigma projects we encounter several different types of risk, such as technical, operational, financial, and environmental risks, or risks involving personnel or the process. As members of the Six Sigma team, we must understand what kinds of risk could keep our project from moving forward. The process of risk analysis involves identifying and assessing the risk factors that could impact the process or the project. Once the potential risk factors have been identified, we should determine their probability of occurrence, as we can use that probability to identify the assets we need to protect. In case of a high probability of occurrence, we need to focus more on those areas to prevent the risk from happening. This helps us identify the necessary and appropriate preventive measures based on what could go wrong, what the risk factor is, and its probability. It is therefore necessary to balance the impact of the risk against the cost of prevention. We want to make sure that we are taking appropriate measures that are cost effective to reduce the impact.

In order to record the identified risks, it is suggested to use a 'risk register' to record the risks that have been identified and then perform a risk analysis. It is crucial to understand the difference between a risk register and failure mode and effects analysis (FMEA): FMEA looks at the failure of a specific process, whereas the risk register records the different factors that will affect the ability to meet the project or organizational goals.

Let's describe the various elements that form the column headings we might find in a typical risk register.

•First we start with a tracking number that uniquely identifies each recorded risk, which helps the team reference and
monitor that risk.
•Then we cross-reference this risk to other related risks in the work breakdown structure. This helps the team to monitor
each task that might be affected by that risk.
•The third column is the date that the risk was identified or when it was added to the risk register.

These first three columns help the team track each risk and its context throughout the project life cycle.

•The next column is a description and this should be a brief and concise description of the risk.
•In the next column we add the cause of the risk to the risk register.
•The next information is the expected impact of the risk on the project outcomes. Now, this part of the register might be
further divided so that we can detail the impact of each risk on specific project outcomes.
Fig: Risk Register Log
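As a purely illustrative sketch, a single risk register entry could be represented as a simple record like the one below; the field names follow the columns just described (plus the severity, likelihood, and response fields covered next) and are not a standard template.

```python
# Illustrative data structure for one risk register entry (field names assumed).
from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    tracking_number: str     # unique identifier used to reference and monitor the risk
    wbs_reference: str       # cross-reference to related work breakdown structure tasks
    date_identified: str     # date the risk was added to the register
    description: str         # brief, concise description of the risk
    cause: str               # what would trigger the risk
    impact: str              # expected impact on project outcomes
    likelihood: str          # e.g. low / medium / high
    severity: str            # e.g. low / medium / high
    response: str            # planned response if the risk occurs

entry = RiskRegisterEntry(
    "R-001", "WBS 2.3", "2024-03-01",
    "Interviewees may withhold accurate information",
    "Insecurity or defensiveness among customer service representatives",
    "Inaccurate data leading to faulty analysis",
    "medium", "high",
    "Train interviewers to conduct non-threatening interviews",
)
print(entry.tracking_number, entry.likelihood, entry.severity)
```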

Project Severity
The next aspect in risk management is the severity of the project. Severity is considered as the extent to which
each risk could impact the project. This includes the likelihood of the occurrence of the risk. We could use the
terms like low, medium, or high to qualify the severity of the likelihood. The final column in the risk register
defines the responses that will be taken by the project manager or the team if that risk happens to occur. Now,
these responses may include accepting a risk if it is outside of the management team’s control or adjusting the
project plan so that it lessens the impact to the project. Now let's take a look at a brief scenario to understand what risk analysis looks like in a Six Sigma project. In the current situation, the customer satisfaction rates are much lower than the company would like them to be. This is negatively impacting repeat business. Therefore, the goal of this project is to improve customer satisfaction while maintaining a planned fee increase. The team needs to interview frontline customer service representatives to examine current customer-facing processes, and it uncovers a risk associated with these interviews.

 
Problems in the Process of Risk Management
•The first and foremost issue is that the team feels the customer service representatives may not be fully honest or forthcoming with accurate information, as a result of feelings of insecurity, guilt, defensiveness, or fear.
•If, because of such factors, the customer service representatives are not honest, do not give full disclosure, or do not cooperate with the interviewers, then the data that is collected will not be accurate.
•This leads to faulty analysis, poor solutions, and not actually solving the problem. The team therefore rates the severity as high, since if the risk is actualized the result is a failure to meet the project goals. However, the team rates the likelihood as medium, since they feel that most customer service representatives will be honest if the situation is handled correctly. In order to mitigate this risk, the team plans to implement a training session to teach the interviewers how to properly conduct interviews, increasing the chances of obtaining honest responses and accurate data from the customer service representatives.

Determining Risk Probability and Assessing the impact


The sole objective of a Six Sigma team conducting a risk probability and impact assessment is to assign a
combined probability impact rating for each of the identified risks in the project. The assessment of the impact is
typically performed by a team that is made up of people familiar with the risk categories. Clearly these individuals
must have an experience with recent similar projects or be responsible for planning and managing the specific
project areas which might be impacted by the risk. Therefore, in order to ensure that the team uses standard measures of probability and impact, it is suggested that the team use a predefined scale, typically defined in the risk management plan or already available in the organization. This helps the team ensure that everything is measured consistently across all of the risks. The scales can be either ordinal – low, medium, or high – or cardinal, which is a scale that usually runs from 0 to 1. In general, Six Sigma teams prefer to use the numeric scale because it is easier to calculate and quantify the scores using numbers.

We calculate the risk score in the following manner.

Risk Score = Likelihood that the risk will occur (which is the probability) x Effect on the project objectives (which is the
impact).
If we are using an ordinal scale, the risk would be expressed in terms of low, medium, or high.

Illustration
Suppose the probability is medium and the impact is high. Using a cardinal scale, we multiply the value the team assigned to medium (say 0.5) by the impact, which could be 0.9 in this situation.

Then the risk score is given by 0.5 x 0.9 = 0.45, which means that the overall priority would be high.
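As a minimal sketch of the cardinal scoring above:

```python
# Risk score = probability of occurrence x impact, both on a 0-to-1 cardinal scale.
def risk_score(probability, impact):
    return probability * impact

print(risk_score(0.5, 0.9))   # 0.45 - treated as a high overall priority in the example
```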

Project Closure
As we know that each project has a defined life cycle, and the project manager is held responsible for completing
the project ultimately. This suggests that the last stage of a project is the project closure. We say “project closure
occurs when all of the defined project tasks have been completed and the customer has accepted all of the deliverables
together with a signed off approval”. Project closure may also include activities to document the final outcomes of
the project in terms of the output and the fulfillment of the customer requirements. Project closure ensures that all the stakeholders have reached an agreement on the fulfillment of the predetermined project specifications.
Some of the tasks involved in the process of project closure are –

•Ensure that all of the final products and services are delivered.
•Project results should be reported both internally and to the customer, if appropriate.
•Any open project contracts must be closed at this point, and as part of project closure the project team should be released from its responsibilities.
•Any open financial issues need to be finalized.
•Everything learned from the project should be documented as a learning experience to make sure that other teams can benefit from this team's lessons.
•Any ongoing documentation needs to be completed and finalized to close the project.
•Before the project is officially closed, it needs to go through a final review.

Now, in a Six Sigma project implementation, before project closure a final phase has to be conducted, known as 'project validation'. In the process of project validation, the project activities and resulting benefits are reviewed and documented. It is important to analyze what went well and what went wrong, and the reasons why. Project validation is considered a valuable learning tool, with the intention of making sure that we can repeat successes and avoid making the same mistakes.
This also leads us into determining improvement opportunities for future projects. The final project closure is
considered complete using an official document. This final document of the project is typically called the ‘project
closure document’. The project closure document is actually validated once we have the signatures of the key
stakeholders. This allows the project to be formally terminated and it symbolizes the official end of the project. It
also passes the accountability for maintaining the process improvement to the customer or the functional
department within that organization.

An affinity diagram is defined as a way of organizing a large number of ideas based on their natural relationships. In general, when a team performs brainstorming sessions, they come up with a huge list of ideas, but there needs to be some mechanism or methodology for organizing these ideas. This is where the affinity diagram comes into play. Affinity diagrams are typically created by the team as a silent process in which each team member puts their ideas on sticky notes. Thereafter the team moves these different ideas around until they fall into categories, and this is when natural relationships become evident.

Uses of Affinity diagrams in process applications –


•An affinity diagram can be used in all stages of the DMAIC process: any time there is a brainstorming activity, it helps to develop possible solutions or to start determining the different causes or problems associated with the process.
•Affinity Diagram is useful since it helps the team to organize these different ideas into various categories.
•Affinity diagrams are particularly useful when organizing large volumes of data where those common themes may not be
as easy to see initially. This is where the team can move the different ideas around to form categories so that those themes
start becoming more prevalent.
•Affinity diagrams are very helpful in developing group consensus, as everyone on the team is involved in developing the affinity diagram.
•Affinity Diagram also helps to stimulate creativity and brainstorming.
•It helps the team to look at what the familiar problem is, but look at it in a new way by taking the ideas that were
brainstormed and helping to group them.
•Affinity diagrams are very useful when the problem is not understood since it helps the team to brainstorm and come up
with ideas together.
•Affinity diagrams make it easy for the team to analyze how the details relate to the whole system and to each other
depending on these different natural relationships.

The affinity diagram is also very useful when the problem is not clear or is considered complex, since with large volumes of ideas it becomes difficult for anything to really stand out. Using this methodology the team can work on developing those common themes. Affinity diagrams can be more appropriate or more useful than other tools, since we start with various ideas and then build upon them to find common themes, rather than working top-down.
We shall now consider an example of an affinity diagram. In the given situation we have a Six Sigma team at a
retail bank that is working on a project to address and organize the voice of the customer for addressing the
credit card requirements. Here, the team decides to use the affinity diagram to organize its list of potential
factors that affect credit card marketability. The team did a brainstorming activity to document the various ideas on sticky notes, as illustrated in the diagram. By moving these different sticky notes around, the team was able to develop specific categories. The diagram illustrates the high-level needs and the verbatim voice of the customer.
 

Process of Affinity Diagramming


Suggestions for creating an affinity diagram

•Before starting it is very important to make sure that we have a team since we would want to make sure that we are
getting a wide variety of ideas during the process of brainstorming and grouping the different ideas into categories.
•The team should be cross functional with typically about four to seven team members from various areas to ensure a
perfect balance of ideas. This also helps in ascertaining that we are looking at the problem and we’re trying to solve it
from multiple perspectives.
•Most importantly each team member must have a basic understanding of the problem. Since we are not aiming to solve
the problem but trying to develop a list and subsequent grouping that will help to solve the problem.
•Ensure that the team is able to think outside the box because the aim is to have a solid brainstorming session to develop a
comprehensive list of possible ideas.
•The team members must look at the problem from multiple perspectives and outside of the box to make sure that we are
creating the affinity diagram in a way that’s going to be helpful to the team and to the project.

Steps in creating Affinity Diagram


•The first step in creating the affinity diagram is to write a good problem statement. This means it should clearly define the problem itself and be as specific as possible, while ensuring that the problem statement is not overly confining: the team should be able to understand the problem and think freely about what we are trying to solve. To illustrate this step further, let's take the example of a Green Belt professional working on a Six Sigma project with the design department of a crystal figurine manufacturer. The team is trying to help create a new product line. While creating the affinity diagram, they are trying to develop ideas for improving the product design process, which leads to the problem statement: "We need to improve the product design process."
•The next step is to create the affinity diagram, which starts with brainstorming ideas. As the team does this, they write the various ideas on sticky notes and then place them randomly on a board. Several of the ideas include things such as performing a failure mode and effects analysis and doing a process analysis using SIPOC. At this point, the ideas are simply placed on the board, not in any specific order.
•The third step of the process is to start sorting the ideas into groups. This is done by the team placing the ideas into similar groups without discussion. It is a silent exercise in which the team members move each sticky note around to start developing the natural relationships between the ideas. For each group of sticky notes, the team then looks at the group, develops the common theme, and labels that heading, so the finished diagram has the ideas grouped under the headings.
•The finished affinity diagram then has the various ideas all placed under different headings, with the problem statement located at the top of the diagram.
Now, as the team works on affinity diagrams, it is essential to consider what makes the exercise effective and what doesn't. While developing affinity diagrams, there are several things we should make sure the team does.
•Firstly we need to ensure that the ideas are easy to see from a distance and they need to be large enough. Therefore the
team should use fairly large sticky notes to make sure that the team can see it and they will use a dark pen or marker to
make sure it’s easy to read.
•Also allow plenty of time; in some cases, if this is a big project, several days might be needed for this activity. The team must have enough time to properly brainstorm and come up with enough ideas.

It is also important to use intuition and gut reactions. As a team we would want each individual to be able to
develop their ideas freely and then use super headings to combine groups when it’s time. This will help start
grouping some of the ideas together.

While developing the affinity diagram it is not suggested to –

•Determine the categories in advance since that limits what we develop as a team and what we think of.
•Deliberate or criticize during the brainstorming process, since the idea of brainstorming is for people not to be afraid to suggest different things or different ideas.
•Team members should not be allowed to talk while the ideas are being sorted. As we want people to be able to think on
their own and not lead the group by discussions.
•Also do not order the ideas before they are placed in groups, so that those natural relationships and that natural order
come out of this process.
•Do not over-consolidate ideas. The aim is not to get down to the fewest possible number of categories but to find the natural relationships.

An interrelationship digraph is a useful method for showing the cause-and-effect relationships between multiple ideas. While working in a team, the interrelationship digraph is considered a very important tool, since it helps in creating a relationship diagram and organizes the different ideas based on their natural links. This method is very useful for exploring the links between different aspects of a very complex situation. The team uses the interrelationship digraph by first developing multiple ideas and then determining what the relationship is between each of those ideas. As we go through the process, for each idea we ask whether it causes or influences any other idea.

This is where the arrows are added: we draw an arrow from each idea to the one it causes or influences. Then we look at the total number of arrows going into and out of each idea, to see which ideas have primarily outgoing arrows and which have primarily incoming arrows. The ones with primarily outgoing arrows are usually the basic causes; this helps to tell us what the leading issues really are. The ideas that primarily have arrows coming into them are the final effects, and these might be critical for us to address as well.

Usefulness of Interrelationship Digraphs


•Interrelationship digraphs are useful as they help to explore the root cause of the problem.
•The relationships illustrated in the interrelationship digraph help in understanding the cause-and-effect relationships and determining what the main causes are.
•Interrelationship digraphs also help the team to discover what those influencing factors are and what causes those effects
to happen.
•Interrelationship digraphs information can be used to evaluate the relationships between our ideas by again looking at
how many arrows are coming into each idea and how many arrows are going out of each idea.
•The interrelationship digraph is particularly useful when there are multiple cause-and-effect relationships and we are trying to understand what is happening within our process and what those relationships are.
•Interrelationship digraphs are typically used in the Define and Analyze phases of the project, since in these phases we are trying to understand what the key factors and influential relationships are.

Illustration: A general physicians’ group is experiencing a relatively high number of patient complaints regarding
the lack of returned phone calls following a patient visit where some kind of test was ordered. In particular, the
patients are frustrated that the promised call notifying them of the test results is either delayed or must be
initiated by the patient. The office manager of the group conducts a brainstorming session to generate potential
reasons for the lack of effective and timely follow-up calls. The group then takes the brainstormed list and
organizes the potential reasons using an interrelationship diagram.

The basic idea is to count the number of “in” and “out” arrows to and from a particular issue and to use these
counts to assist us in prioritizing the issues. In the interrelationship diagram above, “Overly optimistic promise
dates for follow-up calls” is a key issue and, of course, would cause patients to expect a phone call faster than
the group believes it can deliver it. However, do not summarily ignore or devalue the importance of issues with
few “in” and “out” arrows until we have verified empirically the influence of these issues.

Process of Creating an Interrelationship Digraph


Steps involved in creating an interrelationship digraph

•The first step involves determining the problem. This requires the team to thoroughly understand the problem statement, which should be clearly visible to the team.
•Next, once the team understands what the problem statement is, they start putting their ideas on cards. For this the team may use tools such as brainstorming to ensure that they have numerous ideas and a wide breadth of various ideas that are impacting the problem.
•In the third step the team takes those cards and arranges them so that they can start identifying what the relationships are.
•In the fourth step, the team spreads the ideas out as much as possible so that it can analyze the relationship between each of those various ideas. The team identifies those relationships and adds arrows to indicate the causes and influences. Arrows start from the idea that is the cause and go to the idea it influences.

Tips for creating interrelationship digraphs


•Ensure to use a spacious work area. Since we may have a large number of ideas, it is better to make sure that these are
spread out large enough so that we can look at each of the ideas and be able to draw arrows throughout all of the various
relationships.
•Very importantly ensure that the cards are easily visible. Each team member needs to be able to easily read each card.
•For creating the interrelationship diagram one should have team ground rules, for example that everyone should be respectful.
•Ensure that this is a team activity and everyone is working together.
•Everyone must avoid criticism or negativity.
•The focus should be on the problem itself and working productively towards solving that problem. In this process, ideas
can be taken from another session or another event that used an affinity diagram. Thereby using that information to
understand those relationships.

In the process of drawing the interrelationship diagram, each of our ideas is put on a sticky note. Then with each of the ideas we go through and compare each idea to all of the other ideas to understand the relationship. This is
where a circular format is very useful because we want to make sure that we can draw arrows appropriately.
Once our arrows are drawn then we can understand the relationships. This is done by capturing how many arrows
are going into that idea and how many arrows are going out of that idea. We then use this information to
understand the most critical areas to address. In which case, the ideas having the most outgoing arrows are the
causes, and then we consider how many ideas have the most incoming arrows. These are the key effects; these
are the things that the customer would see.

It is very essential to understand the key causes and the key effects. Our key causes are those ideas that have the greatest number of outgoing arrows; these are the ones we need to go after and work on fixing, since they have a big impact on the rest of our ideas. Our key effects, on the other hand, are those ideas that have the most arrows coming into them. These are typically what the customer sees, so this is where we can measure the impact based on what the customer is experiencing.
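
The arrow-counting step lends itself to a quick tally once the arrows have been drawn. The sketch below, in Python, is a minimal illustration of that tally; the idea names and cause-effect pairs are hypothetical and only stand in for whatever the team has placed on its sticky notes.

```python
# A minimal sketch of the arrow-counting step of an interrelationship digraph.
# The ideas and the (cause, effect) arrow pairs are hypothetical examples.
from collections import Counter

ideas = ["Unclear process", "Lack of training", "High rework", "Customer complaints"]

# Each tuple (cause, effect) represents one arrow drawn on the diagram.
arrows = [
    ("Unclear process", "High rework"),
    ("Lack of training", "High rework"),
    ("Lack of training", "Unclear process"),
    ("High rework", "Customer complaints"),
]

outgoing = Counter(cause for cause, _ in arrows)
incoming = Counter(effect for _, effect in arrows)

for idea in ideas:
    print(f"{idea}: out = {outgoing[idea]}, in = {incoming[idea]}")

# Ideas with mostly outgoing arrows are candidate key causes;
# ideas with mostly incoming arrows are candidate key effects.
print("Key cause:", max(ideas, key=lambda i: outgoing[i]))
print("Key effect:", max(ideas, key=lambda i: incoming[i]))
```

In this hypothetical tally, "Lack of training" has the most outgoing arrows and would be treated as a key cause, while "Customer complaints" has the most incoming arrows and would be treated as a key effect.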

Tree Diagram is considered as one of the most useful tools used by the Six Sigma team that involves breaking
down different ideas or problems into finer details. The tree diagram derives its name from its structure: it caters to the higher-level problem first, which is then broken down into categories. Further, these categories are broken down into finer and finer levels of detail known as branches, and this process of creating categories and sub-categories looks like a tree structure. Other names we might have heard are systematic diagram, tree analysis, analytical tree, or hierarchy diagram; these names arise from the hierarchical structure, depending on whether we view it vertically or horizontally. It is very
useful to the team because as the team goes through and understands the problem statement, it helps move the
team forward step by step in getting from general ideas into much more specific and detailed information.

Types of Tree Structure


•Convergent diagram: This is essentially the reverse of the tree diagram. It begins with the detailed information and then groups it into more general information, moving in the opposite direction to converge on a higher-level area.
•Circular Structure: The second type of tree structure commonly used is a circular structure, which takes a circular approach to breaking down ideas. Starting in the middle, we break down the idea into different categories, which are in turn broken into other ideas. The branches form a roughly circular shape, which can be useful when the initial break is into two or three different categories and gives a more representative structure.
Illustration of a tree diagram
We begin our problem statement as – “How can we reduce breakages in transit?”
We then break this down into two categories: improving our packaging and improving our delivery process. How do we improve our packaging? This can be done by employing skilled packers and changing packaging materials. As we continue to ask this question, we move to the right.
As we focus on improving our delivery process we can do this by changing our transport mode and by hiring
skilled drivers.
Benefits of using Tree Diagram in Six Sigma Projects

•These diagrams are very easy to use, read, and interpret


•It helps to get everyone in the team on the same page by showing the relationships and linkages between the different parts of the diagram.
•It also provides a very effective way of checking the details because we can go through each step of the process to
understand how the process works and then the relationship between those.
•It also promotes step-by-step thinking because we can understand at each branch how this happens. Also we can move to
the right in order to say, for this to happen what else needs to occur. Therefore we can break down each of these step by
step.
•It also highlights the complexity by looking at the number of branches and how everything is related to each other.
•The tree diagram is a very useful tool within the Define phase so that everyone on the Six Sigma team understands
what’s involved in this project and what are the steps required with moving forward with the problem at hand. This
enables the entire Six Sigma team to have a basic understanding of the process that they are trying to improve.

This process helps the team to identify on that top level statement, what those key issues are and then further
break that down into the causes or factors.

Tips for good Tree Diagramming


•Define the process for breaking down the details.
•Person leading this event should have a short training session on what a tree diagram is and how the process works.
•Process of training should also cover process of breaking down from that high-level problem statement into each of those
further branches.
•Define as a team what is that criteria to understand and then determine when we have reached that lowest level.
•The goal should be to break down the tree diagram to get those key factors that we can work on and improve to reach our
process improvement.
•Must have no more than four items at any of the levels. In general, when we have more than four items this means we are
getting too far into the details and we probably need to have another category before we get to that.
•Make sure that the groups or levels do not merge as we break down the structure; otherwise we will not have clear and direct relationships.
•After completing the tree diagram, it is crucial for the team to go back and review it. We want to make sure that
everything adds up from the higher levels to the lower levels and also backwards; that those lower level items add up to
the higher level items and also check for consistencies.
•Look for ways to improve the diagram and add anything that is missing or unclear.

Process of Creating Tree Diagrams


We briefly define the steps involved in creating a Tree Diagram

•First we define the problem statement. This helps to make sure that each of the team members understands the purpose
of this exercise. The problem statement helps to communicate what the team should focus on. This gets everyone on the
team focused on working towards that common goal.
•We then take that information and break it down into the next level to start to identify the key issues. This involves
creating branches to understand the relationships and what causes these. This step helps the team to further breakdown
that top-level statement into the key issues.
•Then in the next step we further break down the tree diagram into subsequent levels. Here we can start to identify the
causes or those factors that are the key issues.
We now define in detail the steps involved in creating a tree diagram

Step one – Define the top-level problem statement


Three steps required to perform when defining a top-level problem statement –

•Carefully define the top-level statement as a clear statement to make it easier to break it down into its subcomponents
•Use a large workspace that all team members can see, and
•Use sticky notes or cards that can easily be moved and reused if the tree needs to be repositioned to maintain clarity

Note the top-level statements may be derived by using another tool, such as an affinity diagram.

Step two – Identify key issues


Eight steps to perform when identifying key issues

•Carefully define the question that will lead from each level of detail to the next to make sure it is based on and supports
the achievement of the objective
•Ensure each item has a direct relationship with the item it’s part of
•Include questions that will help ensure the items “add” up to items at the higher level of detail
•When developing criteria, be mindful that not going far enough may result in overlooking something important
•Remember that if more than four items are identified at the first level, it could be that some of the items really belong at
a lower level
•Continue the process for each successive level until no further breakdown is needed
•Remember that typically, each level is expected to have two to four sub-items; any more, especially at higher levels, can indicate that some items have been placed on too high a level
•Be careful to keep groups separate; if they start to merge, rearrange the tree so that the relationships between levels are clear

Step 3 – Identify causes and factors


Three steps in identifying causes and factors

•Review the completed diagram and look for improvements


•Remember improvements will require the tree diagram to be rearranged, and
•Be sure to apply the process of breaking down levels of detail, the criteria for stopping, and the review for improvement
each time an improvement is made

Prioritization Matrix
Prioritization Matrix is another useful tool within Six Sigma projects. Prioritization matrix is one of the quality
tools that takes different and diverse items and helps the team prioritize them based on specific criteria. Using the specified criteria, the team scores each item and determines the total value in order to rank and prioritize each of those items. The prioritization matrix is a very useful tool in Six Sigma projects,
for instance, if we have several projects from which we are trying to select, or within a project we might have
different activities or process improvement ideas that we want to focus on, but we need to decide where we
should focus the efforts initially.
Let’s say for example if we have four different designs and we need to determine which design we should select,
we could prioritize those based on four different criteria – customer satisfaction, ease of implementation, impact
on scheduling and least additional training cost. Thereafter the team would rank those and add up the total for
each of those designs. The highest ranking design would be the one where we should focus our priorities.

Benefits of Prioritization Matrix


•The prioritization matrix is an extremely useful tool partly because of its ease of use.
•Team can then develop the criteria that are important based on the Voice of the Customer, which can then be used to
prioritize different alternatives. It’s also useful in facilitating the analysis because we have very specific criteria that the
team has agreed upon.
•Prioritization matrix takes this information and displays it in a very easy-to-read table format that helps the team reach
consensus.
•Prioritization matrix should be used when we’re deciding on the next steps. Once we have brainstormed a list of potential
areas to focus on, this gives us a quantitative way for the team to decide what those next steps should be.
•Prioritization Matrix is also useful when we have complex or unclear issues since it helps the team to further prioritize to
focus the efforts and it links everything back to those specific criteria.
•It helps the team to achieve group consensus on where the priority should be because everyone in the team is involved in
ranking each of those different possibilities based on the criteria.
•Prioritization matrix is typically used in the Define and Improve stages of a project. This is when we are determining
what the project should be within the Define phase and then also in the Improve phase, this is where we’re trying to
determine which improvement actions a team should take.

Process of Creating a Prioritization Matrix


A prioritization matrix is mainly used to reduce the items to be prioritized to a manageable number, in
combination with tree diagrams, while deciding on next steps to follow, to prioritize complex or unclear issues,
and to achieve group consensus on priorities.

STEP 1 – Identifying the items and the criteria


The first step involves identifying the items and the criteria. It is very important to make sure that the team has
mutually agreed upon criteria as to how to measure each of these items and prioritize those.

STEP 2 – List the items to be prioritized on the vertical axis


The second step involves listing the items on the vertical axis. These are all the items that we are going to evaluate against the criteria and prioritize.

STEP 3 – List the criteria on the horizontal axis


The third step involves listing the criteria on the horizontal axis; this is what we’re going to be evaluating each of
the different items against.
STEP 4 – Score each item against each criterion
The fourth step involves scoring each item as per criteria and adding a weight associated to the importance of
each criterion.

STEP 5 – Total the Score of each item


The fifth step involves totaling the scores for each item.

•In the next step we add the criteria to the table as column headers; these criteria should reflect the qualities of the solutions that are required to address the issues.
•The next step in the process then is to add the weight of each criterion because some of the criteria are more significant
in terms of corporate priorities.

For instance, if we look at low cost, this is important to the corporate priorities and it’s rated fairly high since
keeping the cost low relates directly to a corporate goal. These weights should represent percentages and these
are taken into account when we talk about the base score. The next step in the process is then to score each
value. The scoring becomes the base value. This is multiplied by the weight to give the weighted value.

Each team scores each factor and multiplies the factor by the weight and then they add both values to the table.
For instance for availability of resources the base value is 3 which is multiplied by the weight of 0.5 which results
in a weighted value of 1.5. Then the team totals the scores for each factor to identify the top three.
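
The weighted scoring described above is simple enough to tabulate by hand, but a short script makes the arithmetic and the ranking explicit. The sketch below is a minimal Python illustration; the criteria, weights, and base scores are hypothetical stand-ins rather than values from the text.

```python
# A minimal sketch of weighted scoring in a prioritization matrix.
# Criteria, weights, and base scores below are hypothetical examples.

criteria_weights = {
    "low cost": 0.3,
    "availability of resources": 0.5,   # base 3 x weight 0.5 = weighted 1.5, as in the text
    "ease of implementation": 0.2,
}

# Base scores given by the team for each item against each criterion (1-5 scale).
base_scores = {
    "Design A": {"low cost": 4, "availability of resources": 3, "ease of implementation": 5},
    "Design B": {"low cost": 2, "availability of resources": 5, "ease of implementation": 3},
}

def weighted_total(scores):
    # Multiply each base score by its criterion weight and sum the weighted values.
    return sum(scores[criterion] * weight for criterion, weight in criteria_weights.items())

totals = {item: weighted_total(scores) for item, scores in base_scores.items()}
for item, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{item}: {total:.2f}")
```

The highest total identifies where the team should focus first, exactly as when the table is filled in manually.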

Introduction to Matrix Diagrams


One more useful Six Sigma tool is the matrix diagram. The matrix diagram is defined as a quality tool that depicts
the relationships between two, three, or four items or types of information. Matrix Diagram is considered to be
very useful since it’s easy to create and also uses symbols, numbers, or letters to show the relationships between
each of the categories being analyzed.

Fig: Responsibilities for performance to customer requirement

Usefulness of a Matrix Diagram


•Matrix Diagram is useful when we are trying to assign project tasks as this shows the relationship between the project
tasks and perhaps the person with the appropriate expertise.
•Matrix diagram is useful when we are determining causes and effects of problems as we can show the relationship and
also the strength of the relationship.
•Matrix Diagram can also be used when we’re combining two tree diagrams into a single matrix. Such that it can be used
to highlight the relationships between those two tree diagrams.
•Matrix diagrams are typically used in the Define and Improve phases. In the Define phase, it is useful to show relationships between the roles and the events that will occur in the project itself. In the Improve phase, it shows which process improvements could be taken and where they will be helpful within the process.
 
Types of Matrix Diagrams

T-shaped diagram: A T-shaped matrix is useful when we have three groups of items, but we only measure the relationships of two of the groups against a common third group. We assume that those two groups are not related to each other; each is evaluated against the same group, so we look for those relationships and not for the relationship between the two.

Y-Shaped Matrix: The second type of matrix is referred to as the Y-shaped matrix diagram. It is primarily used to inter-relate three groups of items such that each group is related to the other two groups. This means it is done in a circular fashion to show the relationships between those groups.

C-Shaped Matrix: The third type is the C-shaped matrix. It’s used to relate three groups of items all together and is done
simultaneously. This tool is helpful as it shows the information in a 3D format.
The difference between the Y-shaped and the T-shaped matrix is that we are assuming there is a relationship
between each of the three groups. So now we would be able to measure the relationship between each of those
three groups.

Within a matrix diagram various symbols can be used to show the relationships. The first is we can have a
relationship that takes us from a strong positive to a strong negative. Therefore we have various symbols that we
can use to show that relationship. We can also use symbols to show the roles and responsibilities. For instance,
we can look at our suppliers, the customers, who are going to do the activity, and who owns the activity. Then we
can also use a Likert scale to rate it from least important to most important on a scale of 1 to 5.

Tips for creating a Matrix Diagram

•Consider what does not need to be known.


•Focus on what are those key important relationships that we’re trying to solve and that helps us determine the
appropriate matrix.
•Clearly define the symbols; in some cases, we may even want to make up our own. Using the symbols helps us quantify where we should focus our efforts and then use this information for prioritization.
•We want to make sure that we are focusing on the key issues only. So linking that back to the symbols and the rankings
help focus on where our teams’ highest priorities should be.
•We can use these rankings to understand the patterns and the relationships and then as a team further investigate any of
those interesting findings from our matrix diagram.

Process of Creating a Matrix Diagram


Matrix diagram is a tool used to show the existence and strength of relationships between two or more groups of
information. Several variations of the diagram exist, but the L-shaped matrix is the most commonly used. A five-
step procedure guides the creation of each type of matrix.

Step 1 – Define the objective and identify the items that need to be related
This involves clearly and accurately defining the objective of using the matrix (this statement drives the analysis and will be used later to direct activities), deciding what needs to be related to achieve the objective, and defining what does not need to be known, to help focus the exercise and reduce unnecessary effort.

The first step is to define the objective – and let’s take a look at this through an example. We have a customer
that’s a National Hospital Corporation. And they have recently noted an increase in patient wait times for surgery.
We look at this by starting to define the items that we’re going to relate, and our team determines that in order to meet the objective of reducing wait times, we need to know which factors are contributing to the problem and which actions the corporation can take to make the biggest improvement.
Step 2 – Choose the appropriate matrix type
Remember that information or analysis requirements will typically dictate the matrix that will be used.

The next step is to determine the most appropriate matrix type. In this step, the team needs to know which
contributing factors they should work on eliminating or which they should focus on improving first. Therefore the
team only needs to relate two groups of information. These are the causes and also the inputs from the surgical
units about the importance of each cause. Therefore the team decides to use an L-shaped matrix.

Step 3 – Determine how relationships will be depicted


Some relationship symbols are common, but other symbols can be used if they are clearly defined.

Next, the team needs to decide which symbols they should use to depict the relationships. In this example, the
team decides that the relationship between the causes and the long wait times at each surgical unit will be
defined using a scale that ranges from 1 to 10 to indicate severity.

Step 4 – Add information to the diagram


Focus on key areas, as a too long list will have a negative impact on the usefulness of the diagram.

Now, using this information the team creates the L-shaped matrix. The team then goes through, using the various relationships and ratings, and ranks each of the items based on the criteria. Using this information, the team is able to review the matrix to look for patterns in the relationships.

Step 5 – Analyze the matrix


Look for patterns or relationships that reveal areas where further investigation is warranted.

Two important observations stand out. The highest rating is associated with insufficient staff; in particular, the team determines that the unavailability of paramedical staff received the highest rating. They can now look at the effort required to increase the number of paramedical staff members, because this will have the biggest impact. The next key finding was around the lack of communication with physicians, which received the second highest rating.

Now, the team knows that improving communication between the surgical team and the patient’s referring
physician will have a significant and positive impact on the objective of reducing the patient wait times for
surgery.

Process decision program chart is an extremely useful tool in systematically identifying what could go wrong in
the Six Sigma project. PDPC is a very useful tool for teams as it helps them develop countermeasures to prevent potential problems. The Six Sigma team uses the process decision program chart to revise their process improvement plan so that they can mitigate any risk or be ready
with countermeasures in case something does go wrong.

The Six Sigma team begins by identifying their objective with their Six Sigma projects and also their main
activities that are involved with each of these projects. With each of these activities, there could be potential risks
that are associated in each step. The process involved in the decision program chart helps the team identify
those risks and then develop the appropriate countermeasures. Let’s illustrate this with the help of an example
assuming that we have a Six Sigma team working at an auto parts manufacturer and they want to develop a new
process for their line of anti-lock braking systems, or ABS brakes. In which case the Six Sigma team uses the
process decision program chart to identify and assess the risk, and then use this information to develop
countermeasures.

For instance, with this new process, the team identifies one of the potential areas as lack of knowledge. As a
countermeasure they are going to incorporate training, simulation, and hire an external consultant. This helps to
mitigate any risk that’s caused by the lack of knowledge so that we can make sure that the new ABS process
moves forward without any problems. The process decision program chart is very useful in several instances.
First, as we start our planning process with moving forward with our Six Sigma projects, we can use this to
conduct a risk analysis so that we can have those countermeasures in place.

We can also then identify those alternatives in case something does happen. This helps us to be more prepared
and we can plan the appropriate contingencies in case the risks do occur. The process decision program chart is also very useful when time constraints are very tight, because if something goes wrong we do not have the time
then to go back and make changes. Therefore if we planned ahead and we have contingency plans already in
place, then we can move on to that alternative plan quickly without having a tremendous impact on the project
time line. Therefore the PDPC is very useful during the Analyze and Improve phases of the DMAIC process
because this is where we’re spending most of the time, implementing actions for the process improvement.

Process of creating PDPC


We shall now consider the steps involved in the process of creating a process decision program chart.

•The first step in the process is to clearly determine the objective, i.e., what we are trying to accomplish. Then, based on the objective to be achieved, the specific activities necessary to ensure that the objective is met are designed; the focus is on the high-level key activities. Let’s consider an example. We are leading a project team for an online
education provider that wants to improve the learning process for all of its learners. Therefore the objective is to improve
the learning experience. The team works together to brainstorm three different activities that could help meet this
objective. These activities include identify better resource materials, provide interactivity, and improve delivery.
•The second step in the process of creating PDPC is to identify the potential problems or risks that are associated with
each of these activities. What we want to ask here is “what could go wrong?” Or those “what if” type questions to identify
the potential risks that are associated with each main activity.
To illustrate this further consider the scenario of the online education provider, the team went through and identified the
risk associated with each activity. In terms of identifying better resource materials, the team identified the risk of
increased ramp-up time. In terms of providing interactivity, the team identified two potential risks, how they measure it,
and then they need to consider that this is different for each student. Finally in terms of improved delivery, the risk is that
teachers could resist the change.
•The third step in the process of creating PDPC is to determine the most suitable countermeasures or possibilities for each problem or risk. The purpose of this step is for the team to examine each action, assess the practicality of each countermeasure, and eliminate or replace any that may not be feasible. We could potentially have areas where our countermeasures are not sufficient, so the team must concentrate on the feasible countermeasures. Consider the same example of the online education provider: to address the increased ramp-up time, the team decides to add prep time to the schedule. With respect to measurement, the team determines that they need new performance objectives, and since interactivity is different for each student, they also determine that they should have individual performance objectives. Finally, for the potential teacher resistance, the countermeasure is to conduct workshops.

Tips for developing effective PDPC


•Ensure that the team works in an environment such that they are respectful and they collaborate for the brainstorming
session.
•Ensure that everyone feels that they can contribute.
•During the process of development we go through looking for potential risk, in which case we need to keep asking those
“what if” type questions. This will help drive down to the potential risk and then also help to determine appropriate
actions to mitigate those risks.
•In case of any insignificant risks, instead of having the solid line, it is suggested to incorporate dashed lines that help to
indicate that relationship.
•It is suggested that the team could use symbols for rating the countermeasures that helps in prioritizing the
countermeasures as well.
•When considering the countermeasures, ensure there are good evaluation criteria; typical criteria could be the associated time or cost.

Activity network diagram is a quality tool primarily used to demonstrate the required order of the specific tasks that are required in the execution of a project or process. Some of the benefits of using an activity network diagram are –

•Activity Network Diagram helps the team to determine the best schedule for the entire project,
•Activity Network Diagram helps in determining any potential scheduling or resource problems which leads to
developing solutions.
•Activity network diagram can be used to identify the critical path which would help the team to move forward with
understanding what those risks are with missing any potential deadlines or if there are any delays and how those will
impact the entire project.

Several other advantages of developing the activity network diagram are –

•By showing all of the different steps that are required, it communicates quite a bit of information at a glance.
•Activity diagram also shows the sequence of tasks so that everyone can understand what each step is and what has to be
accomplished before it can start.
•It also demonstrates the concurrent tasks so that we can understand what’s happening at the same time. And it also shows
any steps that are interdependent upon each other.
•Activity network diagram also makes it easier to determine the critical path by looking at how long it takes for each step
and what’s required. In addition, it shows the duration for each task.
•Activity network diagram can be used to determine ways to get the project done sooner by understanding what can be
done concurrently.
•Activity network diagram also helps to describe the flow of activities within a process and this helps to get the team all
on the same page with understanding what has to be done for the process to occur.
•Activity network diagram serves as a communication plan and it helps to identify the risks to the stakeholders.
•Activity network diagram is typically used during the Define phase to document the entire project and what steps are
necessary to accomplish the project.
•Activity network diagram is also used in the Improve phase so that the team can identify the process improvement steps.

Process of creating an Activity Network Diagram


Primarily there are five key steps involved in creating an activity network diagram.

•First step involves gathering information in which case it would be required to understand critical information about
what we’re trying to diagram. For instance, we should know the scope when does it start and end.
•Second Step, requires identifying the activities in the project.
•Third step, once all the activities are identified, we need to go through as a team and sequence those activities so that we
understand step by step how the project will occur.
•Fourth step involves identifying activities that are concurrent, in other words, those that can happen at the same time. Any activities that can happen at the same time should be stacked so that we can highlight that relationship.
•Fifth and final step is to add network lines to show the sequence of events and how each step relates to each other.

Illustration
Let us suppose the team has been tasked to improve the process of building a house. The team lists the major
steps involved – everything from the excavation step through the landscaping step. In which case, the team
creates a chart – Activity Network Diagram – where the nodes (the boxes) represent the nine major steps
involved in building a house. Arrows that connect the nodes show the flow of the process.
Some of the process steps (nodes A, B, and C) run in series, while other process steps (nodes D, E, and F) run in
parallel. Note that Step B cannot happen until step A has been completed. Likewise, step C cannot happen until
step B has completed. Similarly step H cannot happen until steps D, E, and F have completed – and ALL need to
be completed before Step H. So, nodes A, B, and C are running in series. Nodes D, E, and F run in parallel. This is
important to know because those steps that are running in parallel most likely will have different expected
completion times.

Critical Path
The team’s job is to take note of which of the nodes D, E, and F, will be taking the most amount of time, and which
of those nodes is expected to take the least amount of time. This is essential when creating the Critical Path. For
instance, if node D is expected to take the most amount of time as compared with nodes E and F, it is not essential that nodes E and F start at the exact same time as node D. Those steps can start later, but they have to be finished no later than the most time-consuming of the three steps that run in parallel. The team evaluates the
nine steps and come to a consensus on how many days each of the nine steps will take. The critical path is a line
that goes through all of the nodes that have the longest expected completion times.

Most Likely Time


Nodes A, B, and C run in series, so the critical path is straightforward. Notice that between the three nodes that
run in parallel, (nodes D, E, and F) node D is expected to take the longest to complete as compared to the other
two nodes. The critical path would run through nodes D and G because those particular nodes have the longest
expected completion times. The line above shows the critical path. By looking at the Activity Network Diagram
the team can easily see that the expected completion time as defined by the critical path is 50 days.
(5+2+12+9+10+7+5 = 50 days) That’s the MOST LIKELY time.
Optimistic Time
The team might want to know what the best case (Optimistic Time), in terms of time, would be. To come up with
that number, the team would decide upon the shortest possible time for each of the nodes, and then add those
up. The numbers in parenthesis are the most optimistic times. (4+2+10+8+8+7+4 = 43)

Pessimistic Time
The team also might want to know what the worst case (Pessimistic Time), in terms of time, would be. To come
up with that number, the team would decide upon the longest possible time for each of the nodes, and then add
those up. Note: To determine the best case or the worst case, the critical path line must be followed. The
numbers in parentheses are the most pessimistic times. (7+3+14+10+11+8+6 = 59) Remember, we are only
calculating the numbers along the critical path when calculating the most optimistic and pessimistic times.

Expected Time
So what does all of this mean? It means the project most likely will take 50 days, but it could take as long as 59 days, or it could be done in as few as 43 days.

Expected Time = (Optimistic + 4 × Most Likely + Pessimistic) / 6 = (43 + 4(50) + 59) / 6 = 50.3 days

Control Bands
We could calculate control bands around the average. Here’s how we do that:

Limits of Expected Variation = (Pessimistic – Optimistic) / 6 = (59 – 43) / 6 = 2.7


For the critical path, we can expect the project to take from 47.6 days to 53.0 days
50.3 + 2.7 = 53 on the higher side

50.3 – 2.7 = 47.6 on the lower side
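
These critical-path figures follow the standard PERT-style formulas reconstructed above. The short Python sketch below simply repeats the arithmetic with the optimistic, most likely, and pessimistic totals from the house-building example, so the rounding can be checked.

```python
# A minimal sketch of the expected-time and control-band arithmetic above.

optimistic = 43    # sum of the shortest times along the critical path
most_likely = 50   # sum of the most likely times along the critical path
pessimistic = 59   # sum of the longest times along the critical path

expected = (optimistic + 4 * most_likely + pessimistic) / 6   # 50.3 days
variation = (pessimistic - optimistic) / 6                    # 2.7

print(f"Expected time: {expected:.1f} days")
print(f"Limits of expected variation: {variation:.1f}")
# The text rounds the intermediate values, giving a band of roughly 47.6 to 53.0 days.
print(f"Band: {expected - variation:.1f} to {expected + variation:.1f} days")
```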

We all know that if we cannot or do not measure a process, then it cannot be improved by any means. This is where the role of performance metrics begins. As we go through the process of improving our systems with
Six Sigma methodology by reducing defects and reducing our variation, we would need performance metrics to
assist the overall process. Performance metrics are used primarily in the Define phase so that as a team, we can
determine the goal of the process improvement effort and find the most appropriate metric to be used to
measure the improvement. This is critical for the team so that they can understand the objective and the problem
they’re trying to solve. In addition, as we go through the rest of the DMAIC methodology, process performance
metrics help in analyzing and tracking the project and whether improvements are moving in the right direction. This will also help drive our project decisions. Note that process performance metrics are critical
within any type of Six Sigma process improvement effort.

•Performance metrics can be used in banking in case we are trying to improve customer service or reducing errors.
•Performance metrics can also be used in information technology, we could look at how we’re improving our coding
process.
•Performance metrics can also be used in terms of customer service, as we could also look at how we’re trying to increase
our customer service satisfaction.
•Performance metrics can also be used within healthcare sector, such that our metric could be reducing defects and any
patient adverse effects.

Therefore it is essential to have performance metrics no matter in which industry we are performing the project.
When we consider sigma level, it is defined as the number of standard deviations, which is represented by sigma
that we fit between the mean of our process and the closest specification limit. When our process is capable of
Six Sigma that means we can fit six standard deviations between the mean of our process and the closest
specification limit. This means that, with the distribution of data, the tails of the distribution that fall outside the specification limits indicate the defects. Therefore at a Six Sigma level 99.99966% of our data fits within the specification limits, or in other words, that equates to 3.4 defects per million opportunities.
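
The 3.4 defects per million opportunities figure follows from the normal distribution once the conventional 1.5-sigma long-term shift is applied. The sketch below is a minimal Python illustration of that relationship; the 1.5-sigma shift is the usual Six Sigma convention and is stated here as an assumption rather than something derived in this text.

```python
# A minimal sketch relating sigma level to defects per million opportunities,
# assuming the conventional 1.5-sigma long-term shift used in Six Sigma tables.
from statistics import NormalDist

def dpmo_from_sigma(sigma_level, shift=1.5):
    # Probability of a result beyond the closest specification limit,
    # after allowing for the assumed long-term shift, scaled to one million.
    tail = 1 - NormalDist().cdf(sigma_level - shift)
    return tail * 1_000_000

print(round(dpmo_from_sigma(3.0)))      # ~66,807, which the text rounds to 66,800
print(round(dpmo_from_sigma(6.0), 1))   # ~3.4
```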

The primary purpose of Six Sigma is to reduce the number of defects and reduce the percent defective and which
is done by reducing the variation as we make every effort to get closer to nearly perfect process. Within Six
Sigma purview there are several commonly used performance metrics such as defects per unit, defects per million
opportunities, first time yield, rolled throughput yield, process capability indices such as Cp and Cpk, and then
cost of poor quality. Cost of poor quality is a way that puts our defects in terms of how much it costs our
organization. The purpose is to use these metrics to help define our problem. We will discuss each of these performance metrics in more detail.

Measuring Process Performance


There are two different approaches suggested to measure process performance such that both of them result in
a sigma value.

Measuring Defects
The first approach focuses on measuring defects, where a defect is a discrepancy defined by the customer. In this case, we should carefully understand the Voice of the Customer and the customer’s expectations. If we fail to meet those customer specifications, it results in a defect – i.e., any nonconformity to the customer’s requirements as specified by the Voice of the Customer. Some potential causes of defects are faulty processes, inferior inputs, or defective machinery. In a service context, a defect could be putting the wrong oil in a car during an oil change. In manufacturing, a defect could be a loose circuit in a computer, so that it doesn’t always work because the connection is not made every single time. We then use this information to calculate the sigma level.
The first method involves counting defects. The easy part about counting defects is that it works with any type of
data – discrete or continuous. We just need to know if it meets the customer’s expectations or not. Now this is
done by examining the output of our process. We look to see if our product meets those customer expectations
or if it doesn’t; and if it doesn’t, then we have a defect. The useful aspect about counting defects is that it works
well with discrete data, so rather than always having an actual reading or measurement of a particular aspect of our product or service, we can simply record whether it is good or bad. This is discrete
data. Then we can use this type of information to calculate the defects per unit, defects per million opportunities,
rolled throughput yield, and sigma value.

Measure variability of the Process


The second method for measuring process performance involves measuring the variability of the process
directly. This is done by looking at the process variability, using the information on the mean and standard deviation, and then comparing it to the specification limits. So when we talk about measuring process variability, it is
essential that we have continuous data and that we have a normal distribution versus counting defects, which is
only discrete information. In terms of understanding our process variability, we need to have continuous data.
Then we can use this information to calculate our process capabilities, the Cp, and Cpk. The information from our
Cp and Cpk values can be converted into a process sigma value using a table.

Defects Per Unit is one of the most commonly used metrics in Six Sigma. DPU is a metric that considers the ratio
of the number of defects divided by the number of units. This ratio provides a good understanding of how the process is currently functioning and what proportion of defects is present.

Some of the benefits of using DPU Metrics are –

•DPU helps to provide a common benchmark in the process of trying to understand the current situation.
•DPU also gives us a baseline of where we currently are with the percentage or ratio of defects.
•Also, when evaluating the current process’s DPU, it gives a fair idea of where we should focus the efforts, because we want to prioritize our process improvement efforts on the process with the highest DPU.
•DPU also helps us to identify future improvement activities: we first define what a defect is before we make our calculation, and then, based on the defects in the current baseline, we can determine what type of improvement activities we need to take.
•Additionally, understanding what a defect is based on the Voice of the Customer, helps in identifying the current non-
conformances.

The defect per unit formula is calculated using two elements – the number of defects observed and the number
of units inspected. It’s important though, before we do the calculation that we understand based on the Voice of
the Customer what the customer requirements are; then, based on that, what constitutes a defect.

Defects Per Unit (DPU) = Number of defects observed / Number of units inspected

Illustration: Consider an example using a water bottling facility. It is suspected that an increasing number of bottles are coming off the production line with caps that don’t seal properly. In order to
calculate their DPU, they determine that a defect in this case is defined as a cap that won’t seal. This ensures that
the team has a common understanding of what a defect is. They also determine that a unit is defined as an
individual bottle. Based on this information they were able to go through and count the number of defects in the sample they pulled; in this case they found 19 defects in a sample of 200 bottles. They use this information to calculate the defects per unit.
DPU = 19/200 = 0.095
Next we consider parts per million (PPM), which is a metric related to both defects per unit and defects per million opportunities.

Parts Per Million (PPM) = (Number of defective units / Total number of units) x 1,000,000

Illustration: We consider an example of a fast food restaurant where the customer requirement or expectation is
set that they should wait no longer than three minutes for service. The manager observes that 15 out of 100
customers waited more than three minutes for their service; therefore, their parts per million, or PPM, for this
process is calculated by
PPM = (15/100) x 1,000,000 = 150,000

This means that 150,000 customers out of every million wait more than three minutes for their service.
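
Both metrics are simple ratios, so they translate directly into code. The sketch below is a minimal Python illustration using the two worked examples above (the bottling-cap defects and the fast food wait times); nothing beyond those figures is assumed.

```python
# A minimal sketch of the DPU and PPM calculations from the examples above.

def dpu(defects, units):
    # Defects per unit: number of defects observed / number of units inspected.
    return defects / units

def ppm(defective_units, total_units):
    # Parts per million: fraction defective scaled to one million.
    return defective_units / total_units * 1_000_000

print(dpu(19, 200))   # 0.095     (water bottling example)
print(ppm(15, 100))   # 150000.0  (fast food example)
```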

Defects per Million Opportunity (DPMO) is a commonly used metric which has been built off the DPU and PPM
calculations. The difference here is that we are calculating the mathematical probability that the process will produce a defect. DPMO standardizes the defects at the opportunity level, which enables us to consider
that there could be multiple aspects of the product or service that are defective, rather than just the entire product
or service being defective. Therefore it allows us to compare processes at different levels of complexity, because we can take into account a more complex product or service that might have 30, 40, 50, 60 or more different characteristics. Each of those aspects of the product or service is an opportunity to be defective.

Illustration
We consider an example to understand the difference between defects and defectives. We consider an example
of a pizza to see how this ties into play. In the case of a home delivery pizza company, there are different
qualities that could cause the pizza to be defective. These could be temperature, toppings, and the size, and we
can use these then to look at our defects per million opportunities. Let’s consider the first opportunity for a defect
is the temperature as it could be defective being too hot or too cold. The second opportunity for a pizza to be
defective could be from the wrong toppings being used. Finally, the third opportunity for a defect could be that the pizza is bigger or smaller than what the customer ordered. This clearly illustrates the difference between defects and defectives.

Now consider a three sigma process, which was considered good enough before Six Sigma was invented; a three sigma process has a defects per million opportunities of 66,800. As we move towards a sigma level of 6, our defects per million opportunities drops drastically, and when we reach the Six Sigma level it is down to 3.4.

Process of calculating DPMO


Defects Per Million Opportunity (DPMO) looks at the opportunities for defects within units that result in products
or service that did not meet the customer’s requirements or their performance standards. Three main
components in the defects per million opportunity calculation are –


•Defect: A measurable failure where we fail to meet the customer requirements based on the Voice of the Customer.
•Unit: The final product or service that is given to the customer.
•Opportunity: A measurable attribute within a unit that could result in a defect; within each unit there can be multiple opportunities.
When we look at the defects per million opportunity formula, it’s calculated by dividing the number of defects in a
sample by the total defect opportunities in the sample.

Defects Per Million Opportunity (DPMO) = (Number of defects / (Number of units x Number of opportunities per unit)) x 1,000,000

OR

DPMO = (Number of defects in a sample / Total defect opportunities in the sample) x 1,000,000
Illustration: Let us consider that we are a Green Belt professional working at a stationery company and that each custom stationery order could have four possible defects: it could be incorrect, have a typo, be damaged, or be incomplete. Therefore, we have four opportunities per order.
Now 50 orders are selected randomly and inspected, and several defects are found: two orders are incomplete; one order is both damaged and incorrect, so it has two defects; and three orders have typos. Therefore, in total we have seven defects. In terms of the total number of opportunities, we have four opportunities per order and 50 randomly sampled orders, so the total number of opportunities is 200.
We calculate our defects per million opportunities as,
DPMO = (7/200) x 1,000,000 = 35,000
This means that if we produce 1,000,000 orders, we can expect about 35,000 defects in those orders.
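
The same arithmetic can be written as a small helper. The sketch below is a minimal Python illustration using the figures from the stationery example above; no other values are assumed.

```python
# A minimal sketch of the DPMO calculation from the stationery example above.

def dpmo(defects, units, opportunities_per_unit):
    # Defects per million opportunities.
    total_opportunities = units * opportunities_per_unit
    return defects / total_opportunities * 1_000_000

print(dpmo(defects=7, units=50, opportunities_per_unit=4))  # 35000.0
```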

Another important measure of performance within Six Sigma is Rolled Throughput Yield (RTY). Yield is defined as the percentage of error-free products. It is an important metric to use when we are trying to analyze processes and identify problems, because we can calculate how many products we start with and how many good products remain at the end of the entire process. Yield can be calculated using first time yield or rolled throughput yield. First time yield typically considers one specific process step, whereas rolled throughput yield looks at multiple process steps. Let’s first consider first time yield, commonly referred to as first pass yield.

Illustration – Let us say there are 1000 units going into the process, of which 100 are scrap; therefore, we end up with only 900 good units at the end. This means the first pass yield is 90% or, in
other words, 90% of the products that come into our process leave the process defect free.
First time yield is typically used for processes or sub-processes that are made up of only one single operation or step. On the other hand, when we are considering multiple steps, we look at rolled throughput yield. In order
to calculate rolled throughput yield we need certain information for that calculation such as total number of
process steps and the first time yield for each process step since we’re going to multiply those by each other.

Illustration: Let’s say we again have 1000 units coming into our process, and we had 100 defects in the first step of the operation. This gave us a first time yield of 90%; therefore, since we had 100 scraps, we only have 900 parts
that are moving into step 2. Out of step 2, we had 90 of those that were scrap; therefore, we only have 810 parts
that are still good at this point of the process. Since we have 810 parts coming out of the process this gives us a
first time yield of 81%.
Clearly there could be many more steps within this entire process; for instance, by the time we get to step 10 we may only have 350 good units coming out. Looking at the entire process, we started out with 1000 units and ended up with only 350 that were good at the end of the process.
Illustration: Let’s consider another example of how rolled throughput yield would be used to give us information
we need on how to improve the process. We have three steps and each of them has their own first time yield. If
we look at those numbers initially for first time yield, they were well over 90%, so we would expect to have a fairly
good rolled throughput yield; however, based on how the calculation works, we will see that it’s actually much
worse.
Step1: We have 100 parts coming into our first step where there are five units that are scrap. This means that only
95 are going on to our second step and our first time yield for Step 1 is 95%
[(100-5)/100 x 100]= 95%

Step 2: Out of the 95 that are going into Step 2, seven units are scrap and this gives us a first time yield of 92.6%
and then we only have 88 parts that are good, that are coming out of step 2.
[(95-7)/95 x 100] = 92.6%

Step 3: Out of those 88 parts we have three that are scrap, which gives us 85 good parts at the end of the process. Step 3 has a first time yield of 96.6%.
[(88-3)/88 x 100] = 96.6%

Rolled Throughput yield is calculated as,

RTY = 0.95 x 0.926 x 0.966 = 0.85, or 85%


This is equivalent to the final percentage of products that we have out of the 100 that are good.

Process of calculating FTY and RTY


First time yield is calculated by dividing the units with no defects by the total number of units inspected and then
multiplying this number by 100.

FTY = (Number of units with no defects / Total number of units inspected) x 100

Illustration: Consider a magazine production house whose work involves several processes, including manuscript writing, editing, designing, printing, cover designing, and binding. In the binding process we find that 2230 out of 2500 books pass the first inspection, which means that we have 270 books with defects.
FTY = 2230/2500 x 100 = 89.2%

This gives us a first time yield of 89.2%, which means approximately 89% of the books are defect free.
The rolled throughput yield is different from the first time yield since we are looking at multiple sub-processes or
multiple steps within the process. Therefore, in order to calculate rolled throughput yield we are going to multiply
the first time yield for several processes by each other.

Illustration: A jewelry manufacturer does rhodium plating and wants to calculate the rolled throughput yield for a sub-process in its plating operation. The plating operation has three steps. The first pass yield for the first step
in the plating operation is 98.7% or 0.987. The first time yield for the second step in the process is 0.958 and the
first time yield for the third step in the process is 0.996.
RTY = 0.987 x 0.958 x 0.996 = 0.942, or 94.2%
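
First time yield and rolled throughput yield are both straightforward to compute once the step counts are known. The sketch below is a minimal Python illustration using the three-step example (100, 95, 88, and 85 good units) and the plating yields above; nothing beyond those figures is assumed.

```python
# A minimal sketch of first time yield (FTY) and rolled throughput yield (RTY).

def first_time_yield(good_units, total_units):
    # Fraction of units leaving a single step defect free.
    return good_units / total_units

def rolled_throughput_yield(step_yields):
    # Multiply the first time yields of all steps together.
    rty = 1.0
    for y in step_yields:
        rty *= y
    return rty

# Three-step example: 100 -> 95 -> 88 -> 85 good units.
ftys = [first_time_yield(95, 100), first_time_yield(88, 95), first_time_yield(85, 88)]
print([round(y, 3) for y in ftys])                      # [0.95, 0.926, 0.966]
print(round(rolled_throughput_yield(ftys), 2))          # 0.85, i.e. 85%

# Rhodium plating example.
print(round(rolled_throughput_yield([0.987, 0.958, 0.996]), 3))  # 0.942, i.e. ~94.2%
```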

We can use the yield information to calculate the probability of a defect. When we talk about yield, yield is a
percentage of the product that has no defect. When we want to understand the probability that we have a defect
we can take the yield minus 1 and that gives us the percentage that is defective or the probability that there is a
defect. Then we can use that information because there is a relationship between the yield and the sigma level. If
we use the z-distribution table we can find the corresponding sigma value.
Illustration: If we have a yield of 90.49%, then we take 1 minus 0.9049 to get a probability of a defect of 0.0951.
When we look up that value in our z-distribution table and follow it over to our z value, we get a value of 1.3. This
is our sigma value, or sigma level; therefore this process has a sigma level of 1.3. Now we can look at what our
yield values are as they relate to defects per million opportunities and sigma level. A sigma level of 3.0 equates to
a yield of 93.3% and defects per million opportunities of 66,800. As we improve our processes to a sigma level of
6, we're reducing our defects per million opportunities to 3.4 and our yield becomes 99.9997%.
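
As a rough sketch of this yield-to-sigma conversion in code (assuming the SciPy library is available; the 1.5 sigma shift used in many published DPMO tables is deliberately not applied here, matching the plain z-table lookup described above):

from scipy.stats import norm

def sigma_level_from_yield(yield_fraction):
    # The z value corresponding to this yield, as read from a z-distribution table
    return norm.ppf(yield_fraction)

yield_fraction = 0.9049
p_defect = 1 - yield_fraction            # probability of a defect = 0.0951
dpmo = p_defect * 1_000_000              # defects per million opportunities
z = sigma_level_from_yield(yield_fraction)
print(f"Defect probability {p_defect:.4f}, DPMO {dpmo:,.0f}, sigma level {z:.1f}")   # sigma level 1.3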
Process capability is a metric used to measure the variability of the output of a process. The two main
components in process capability analysis are –
 
•Specification limits: These should be given to us from the Voice of the Customer and they should be based on the
customers’ requirements.
•Process spread or process variation: This is basically the Voice of the Process, which provides the process limits.
 
With process capability we try to understand how well a process fits within its specification limits and also how it relates to a
target.

Now when we talk about process capability there are two main process capability indices – Cp and Cpk.

•Cp is the ratio of the specification spread to the actual process spread. Since Cp is a ratio, the higher
the Cp value the better the process fits within the specification limits. For instance, if our process width is half the width
of our specification limits then the Cp value is 2.
•Cpk is the ratio of the same elements, but unlike Cp, Cpk takes into account whether or not the process is centered. We
want to understand if we have a non-centered distribution of our data. The higher the Cpk value, the better the fit for the
process within the specification limits. This also means that the process mean is closer to the middle of the specification
limit or closer to the target.

Process of calculating Process Capability Indices


Before calculating process capability two conditions must be satisfied –

•The first is that the process must be stable over a certain period of time which means that the mean and standard
deviation are consistent and do not shift over time.
•The second condition is that the data must fit a normal distribution.

The process capability is calculated by subtracting the lower specification limit from the upper specification
limit and then dividing by six times sigma, where sigma is the standard deviation, a measure of the process
width. This ratio tells us how much of our specification width is being taken up by our
process width.

Process capability, Cp = (USL − LSL) / 6σ

For Process Capability (Cp) there are no specific standards set for a good Cp value; however, most organizations
require a Cp value of at least 1.33. When we look at a Cp value of 1.33 or above, this means that the
process comfortably meets its specification limits; for a centered process this corresponds to roughly 64 defective parts per million.

Interpreting Cp values
•If the Cp value is between 1 and 1.33, it means we are operating under tight control, such that our process is capable.
•If we think about a Cp value of 1, this means that our process width equals our specification width; therefore, if we have
any slight movement off of the mean due to variation then we will have a portion of the process going outside of the
specification limits. Therefore processes that are close to 1 must be closely monitored.
•If we have a Cp value of less than 1, this means that our process is not capable. Essentially this means that our process
width is greater than our specification width. Therefore we’re always going to have a portion of our distribution outside of
the specification limits.

Interpreting Cpk values


Cpk is considered a long-term measure of process capability. The calculation of Cpk is very similar to the Cp
calculation, but now we will be taking into account the centering of the data. Therefore the calculation takes
into account the mean of the data, and each side of the equation uses half of the previous denominator, which is
why the denominator is now 3σ. In this calculation we're looking to see how the mean of the process relates to
the target of the process and how close it is to each specification limit, which is why we’re taking the minimum of
those values.

Cpk = min (Cpu, Cpl), where Cpu = (USL − mean) / 3σ and Cpl = (mean − LSL) / 3σ


If we look at process capability, the industry standards are higher now than they were in the past. It used to be
that the minimum Cpk value was 1.33, which corresponded to a value of 4σ, but with the introduction of Six
Sigma in the mid 1980s that value has risen, and now the commonly accepted values for Cpk are 1.67, which
corresponds to 5σ, or 2.0, which corresponds to a Six Sigma process.
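
The Cp and Cpk calculations can be expressed in a few lines of code. The following Python sketch uses hypothetical specification limits, mean, and standard deviation purely for illustration; it is not tied to any example in the text.

def cp(usl, lsl, sigma):
    # Cp: ratio of the specification spread to the process spread (6 sigma)
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Cpk: like Cp, but penalizes a process mean that is off-center
    cpu = (usl - mean) / (3 * sigma)
    cpl = (mean - lsl) / (3 * sigma)
    return min(cpu, cpl)

# Hypothetical process: specification limits 9.4 to 10.6, mean slightly off-center at 10.1
usl, lsl, mean, sigma = 10.6, 9.4, 10.1, 0.1
print(f"Cp  = {cp(usl, lsl, sigma):.2f}")          # 2.00: the spread easily fits the specification
print(f"Cpk = {cpk(usl, lsl, mean, sigma):.2f}")   # 1.67: reduced because the mean is off-center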

Cost of Poor Quality is a quality metric used within Six Sigma process to understand and eliminate the various
sources of poor quality. Cost of poor quality refers to the cost of the product or service when it doesn’t meet
customer’s expectations. These are costs that are associated with products or services that are of low quality
and these come from activities or processes that do not meet the expected outcomes in terms of the Voice of the
Customer and the customers’ expectations. Typically, these costs are related to inefficiencies, whether they are
within our processes or by not understanding what our customer’s expectations truly are.

Cost of poor quality = Actual cost of the product or service – Minimum cost
Here we consider those costs that are directly related to not meeting the expectations initially.
Now, when we look at the cost of poor quality, we are capturing direct and indirect costs. There are specific direct
costs that are easy to capture such as rejections at the customer site, repair costs, rework or testing costs. There
are other indirect costs that are a bit more difficult to capture. For instance, if we think about the idea of an
iceberg, there are certain things that are easier to see and capture, but then there are other things that are harder
to see that are below the surface. These are typically our indirect costs. These are items such as loss of
reputation in the market, confusion, loss of motivation, late penalties, urgent or hurried deliveries, and loss of
customer satisfaction. These are much harder to capture and determine the specific cause of, but these are all costs that are
associated with a loss from not providing a product or a service that meets customer’s expectations.

Indeed, costs are very high in a non-Six Sigma process. If we roughly consider how costs relate to sigma levels –

•A four sigma process equates to about 20% of sales revenues that are lost.
•A five sigma process equates to about 10% of sales revenues that are lost.
•Larger companies typically have about $8-12 billion per year in cost of poor quality.
Now, when we consider using cost of poor quality as a performance metric, it is considered a very important tool to use as a
baseline, since we can develop our goal for reducing our cost of poor quality using this baseline number. In addition,
when we develop a project charter in our Define phase of the DMAIC methodology, we can use the cost of poor quality in
the project charter. And then as we move forward with the DMAIC methodology we can use the cost of poor quality to
track our benefits and use this information after completing the projects to show the improvement. In addition, cost of
poor quality is a good metric as we are developing our criteria to select future Six Sigma projects.

Types of COPQ
Cost of poor quality is associated with the cost of not providing a service or product exactly as requested from
the customer. When we talk about the concept of cost of poor quality every organization is a bit different and,
therefore, the cost of quality will be a little different; however, there are four general classifications of quality
cost that are consistent across most organizations.

•Prevention cost: These are costs such as education and training before releasing a new product.
•Appraisal cost: This could be testing or inspecting a new service to make sure it meets the customer’s requirements.
•Internal costs: These are costs associated with scrap and rework that are internal to the organization.
•External cost: These are costs that would be external to the organization such as a sales return.
Some of the examples illustrating the different costs of poor quality that fall under each category are
•If we look at prevention, this is our quality planning such as preproduction reviews, developing specifications,
performing preventive maintenance, and housekeeping within the organization.
•Some of the examples of appraisal are test and inspection, and performing things such as our quality systems audit. It
could also be safety checks or security checks.
•The examples of internal failures are those that are internal to the organization, so it could be substandard products or
engineering changes that have to occur because of mistakes or it could be from supplier problems or absenteeism.
•In terms of external costs of poor quality, these could be things such as product recalls, products that are returned, or late
payment penalties.

There are also some hidden costs of quality which includes those things that are more difficult to quantify, but
they still have a significant impact on the organization. Some of the examples of this could be unhappy
customers. In terms of the impact on our business these are customers that potentially will not return, so it’s lost
revenue for the organization. An additional source of hidden costs of quality is unhappy employees.
This has an impact on how the organization operates and typically results in a high turnover rate. In addition, we
could have inadequate service. This is typically a result of unhappy employees or lack of training and education,
and that inadequate service results in unhappy customers and customers that are not going to return. In addition,
we could have internal problems. Each of these four sources are difficult to quantify, but they impact the cost of
poor quality. Now let’s take a look at how our cost of poor quality equates to other conventional Six Sigma
metrics.

Now if we look at the cost of poor quality at 30%, that equates to a sigma level of 3.10, a Cp value of just over 1,
and a yield of less than 95%. But when the cost of poor quality drops down to 20%, the sigma level jumps up to
3.55, the defects per million opportunities drop down to 20,000, and our yield jumps up to 98%. Finally, at
10%, our sigma level jumps up again to 4.6, the Cp value is 1.53 and the defects per million drops significantly to
1000 and our yield increases to 99.9%. When we cut our cost of poor quality in half from 10% down to 5%, our
sigma level jumps to almost 5, our Cp value increases to 1.66, our defects per million is cut by 75% down to 250
and our yield increases to 99.975%.

Types of Six Sigma Teams


For any organization teams are considered to be an integral part of the Lean Six Sigma implementation. In
general, in a Lean Six Sigma implementation the improvement activities are going to be performed as a team.
Therefore the composition of the team is very important to the success of the implementation. So when we look at
teams, we are getting together a cross-functional team, and by doing this there are two key benefits of team
work. Now, by having these cross-functional teams we bring in different aspects of the business. Therefore, the
organizational goals can be better fulfilled from a holistic perspective. Also, as we get each team member
involved, we’re helping to empower each individual on the team to be involved in the process improvement
efforts. There are four key types of teams that are typically used in Six Sigma projects. Now when we consider
these projects, it could be a Six Sigma DMAIC project, design for Six Sigma project, Plan-Do-Check-Act, Lean
Kaizen or Kaizen Blitz. It’s important in each of these teams to have good teamwork.

Four common types of teams in Six Sigma projects –

•Process improvement team: The first type of team is a process improvement team. Process Improvement teams focus on
improving specific business processes. With these teams their goal is to have immediate results and because of this,
Process improvement teams typically concentrate on solutions that are easy to implement and that way they can achieve
quick and immediate results.
•Quality team: The second type of team is quality teams. The purpose of a quality team is to improve internal efficiencies
that impact the output because this is what the customer’s experiencing. There are two key possible activities that quality
teams work on. These include improving a particular process or drafting a quality plan for an organization or functional
department.
•Ad-hoc Team: The third type of team is an ad-hoc team. The purpose of an ad-hoc team is to complete a project within
very defined or specific requirements. Because they have a very specific requirement, ad-hoc teams typically have a
limited lifespan since we’re focusing on that key defined goal. When we talk about ad-hoc teams they are typically
interdepartmental, cross-functional, or they deal with very specific stakeholders.
•Self-managed/Agile Team: The fourth type of team is self-managed teams or agile teams. Self-managed teams lead their
own efforts and manage their own projects. Therefore these require a high degree of collaboration and there is minimal
direction from management. Under self-managed teams, the team leaders focus on guiding the team rather than directing it.

Evolution of Six Sigma Teams


It is very crucial to understand that there are various stages of team development since our project dynamics are
going to change over time and they are also going to change over the lifespan of the projects. As the team
develops, the team members become more familiar with one another. They can start working more closely
together towards that common goal. The nature of the project work itself also changes as the project progresses
and this makes a difference in the team development. The key to making sure that we have good
performance within our team is good leadership. Good leadership helps to improve performance
over time, and when we talk about the challenges of managing team dynamics, there are two key areas
within the manager's responsibilities. Team development progresses through several key stages,
which include forming, storming, norming, performing, adjourning, and recognition.

Stages of Team Development


•Forming: The first phase in the process of team development involves forming. When we get our Six Sigma teams
together, this is where we start to formulate the roles and responsibilities of the team members. We aim to bring the team
together and this is where the team leader needs to provide very specific direction and delegate the roles and the
responsibilities. In this stage the team is in the very initial stages of determining and understanding what their focus is on
the project. As the team starts to understand what they are going to be working on, this is where some of the conflicts can
arise because everyone has a specific interest in what this project is and this is where the team leader needs to coach and
mentor each of the team members to make sure that they are working together closely. This is also a highly creative stage
within the process where we start to truly understand what the problem is and start working towards it. So we need to
make sure that people can be creative during this phase, but also do so in a way that's proactive as a team.
•Storming: In the next stage i.e., storming phase, the members start testing their boundaries and learning how they are
going to communicate with each other. Within Six Sigma projects we bring together diverse backgrounds of people for
the team. Therefore the style of communication is going to be different and that can lead to confusion or issues that cause
these conflicts.
•Norming: The third stage of team evolution is norming. Once everyone starts to understand what the problem is, and they
start learning how to communicate with each other, this is where the relationships really start to gel and the team members
start working together nicely. In this phase the team starts really working on the processes and understanding each other’s
working style. This is also where it’s important for the team leader to really promote the team activity and participate in
the team activity to get everyone working together towards that common goal.
•Performing: Now once everyone starts norming and really working together nicely, this is where the team moves on to
the performing stage. This is the most productive stage within team evolution because the team members are now unified.
They are effectively communicating with each other and working towards that common goal. In this stage, the
team leader takes on more of a supervisory role and stands aside to let the team work together towards the process
improvement efforts.
•Adjourning: The fifth stage of team evolution is adjourning; this is basically the end of the project, this is where the team
dissolves, because they have accomplished their project goals. And at this stage of the project we can really have two
different aspects. We might have team members that are reluctant to let go of the project, or we might have team
members that have lost interest before they completed all of their activities or tasks. Therefore it's important to make sure
we’re wrapping up the project and closing up any loose ends at this stage of team evolution.
•Recognition: This is the last and final stage. Once the team has worked together and they have accomplished their goals,
it's important to recognize the team. This is where we want to make sure that the team leader is giving feedback and they are
celebrating the accomplishments of the team. Now in order to do this it is extremely important to recognize what good
recognition is. We want to make sure that we’re providing positive reinforcement to the team and then also indicate
what’s really important to the organization as we recognize the team members and celebrate their successes.

Six Sigma Organizational Roles


In Six Sigma there is fundamentally an organizational hierarchy and different roles within that Six Sigma
hierarchy. Each one of these roles has a very unique responsibility in the Six Sigma deployment.

•Executive Team and Champion: We begin with the top level of the organization with the executive team and the
Champion. They are the individuals that provide the vision for the organization.
•Master Black Belt: As we move down the hierarchy, the Master Black Belt works with the Champion to select projects.
•Black Belt: The Master Black Belts mentor the Black Belts who are actively involved in leading the Six Sigma projects.
•Green Belt: Now moving down to the Green Belts, this is where process improvements are occurring within their own
jobs.
•Yellow Belt: At the bottom of the hierarchy are the Yellow Belts. These are the individuals that are participating in the
team and have a general understanding of Six Sigma.

Considering each of these levels in the hierarchy, the Master Black Belt, Black Belt, and Green Belt are
considered professional designations. The executive leadership and the Champions consist of the Chief
Executive Officer, CEO, and other top executives.

We shall now discuss the roles and responsibilities at each level, starting from the top of the hierarchy.
Executive Leadership
Executive leadership is considered responsible for the vision and the implementation of Six Sigma. Their primary
role is to ensure that Six Sigma projects are implemented, where those projects are going to help the organization
achieve the long-term strategic vision.

Champions
Champions are essentially the power brokers. These are the individuals that help secure any necessary resources
for the projects and many a times these are the ones that sponsor the improvement projects. The Champions
work as a link between the executive leadership and the Master Black Belt to ensure that the project aligns with
the organizational goals. Therefore it becomes important that the Champion understands the corporate culture.
In general, executive leadership and the Champions have a general understanding of Six Sigma, and their role
and responsibility is to link Six Sigma to the long-term vision and the organizational goals.

Master Black Belt


When we look at the Master Black Belt and Black Belt professionals, the Master Black Belts are considered
the consultants to the team leaders. They work as a conduit between the Champion and the Black Belt to ensure
that appropriate projects are being selected. Master Black Belts also train and mentor the Black Belts. Therefore it
is important that they have a full and thorough understanding of Six Sigma and that they provide mentorship and
guidance to the Black Belts.

Black Belts
Now Black Belts are the project managers in terms of leading the Six Sigma projects. Black Belts are typically the
team leader for any of the Six Sigma initiatives. Additionally, they also mentor the Green Belts. Within the hierarchy,
the mentorship flows down from the Master Black Belt, who mentors the Black Belt, and then the Black Belt mentors the
Green Belt. Therefore the Black Belt plays an important role as well in making sure that the Six Sigma knowledge
is disseminated throughout the entire organization.

Green Belts
When we consider the next level with the Green Belts and the Yellow Belts, the Green Belts also operate as team
leaders but they’re typically working on projects that are directly related to their job function. And they’re using
process improvement tools to improve their job and their function.

Yellow Belts
Yellow Belts are assumed to have basic training and these individuals are actively involved in the Green Belt and
Black Belt projects.

Other Roles within Six Sigma Team


There are various other types of roles involved in a Six Sigma team which are equally important and include the
sponsor, process owner, coach, facilitator, and team member. Now each of the specified roles has a very specific
and individual responsibility, and depending on the size of the teams, some of the different roles might actually
be combined.

•Sponsor: The Six Sigma team includes a sponsor; in general these are executive sponsors responsible for the strategic
direction of the projects. They ensure that the business case is accurately articulated and that the project plan is developed
to make sure it meets that business case. Therefore they are directly linked to the strategic direction of the project and the
business case. The sponsor could be a functional manager or an external customer, but it's important that this is someone
who is typically the recipient of the benefit that the project will produce. In addition, they often serve as the conduit to
making sure we have the project resources. If we have a spot in the project where we need maintenance help or additional
resources, this is the person who would help us make sure we have those resources.
•Process Owner: The next type of team member is the process owner. The process owner is in general the functional
manager, and is the person the team works with directly to make sure that they have the functional expertise. So, while the
project team is working on improving a process, the process owner is someone they need to get involved in the process to
understand the current situation and the current baseline. The team works closely with the process owner and with their
employees to make sure that the project is being implemented properly. Also, it's critical to have buy-in from this person
because, once the team has completed their project, the process owner is the person who owns the process. Therefore it is
important to ensure that we have the appropriate buy-in to make certain that those changes are sustained.
•Coach: The coach is considered important since this person helps to ascertain that the team understands the tools and the
methodologies. In some organizations this might also be the Master Black Belt. Coaches are also referred to as the Six
Sigma experts assigned to help with any queries or to help solve any problems. For instance, if there are issues with setting
up a design of experiments or a hypothesis test, this would be a person that has enough knowledge of the tools and
methodologies to coach and mentor the team to make sure that they are using the tools and methodologies correctly.
•Facilitator: The next role is that of a facilitator. The facilitator acts as the quality advisor to ensure that we are meeting
the project requirements. Facilitator helps to keep the team members focused on the task at hand. They help facilitate
discussions and meetings to make sure that everyone stays focused on the core problem that we are trying to solve. Also
facilitators are responsible for observing the team performance. They are aware of the different stages of team development
and use that knowledge to understand the team performance and help facilitate better communication to move the team
forward in the right direction. Facilitators also recommend the necessary improvements so the team functions better and
the project moves along nicely.
•Team Member: Team Member is directly involved with the team and helps carry out the work of the project. Every team
member would fulfill a different function within the team and would have assigned roles and responsibilities to make sure
that appropriate tasks are being completed on a timely basis. In general, the team leader reports directly to the project
leader or to a functional manager. This helps make sure that the responsibilities are being addressed proficiently and in a
timely manner, and it promotes good communication within the team.

Excessive Cohesion
Team conflict arises where team members are not getting along; however, it is just as important to identify and
address a lack of dissent or excessive team cohesiveness. It might seem very positive within the team that there
is no conflict, everyone is cooperating, and there is good morale, but these can also be signs that we are not
asking the right questions and we are not challenging things. The reason that
we want to make sure that we realize when there is maybe too much cooperation is that it can lead to poor
decisions and this could also be because assumptions aren’t being challenged. It is important to make sure that
we are driving decisions from data and making informed decisions. Therefore if we have insufficient data and
we’re not challenging those assumptions then we might just be going along with the status quo. We might not be
taking into account specific perspectives such as the customer’s perspective.

It is also crucial to understand the importance of playing the devil’s advocate – we need to ask those hard
questions. These are the important questions to make sure that we are going down the right path and ties into
the saying that steel sharpens steel. We want to make sure that we're not just going along with everything and we're
asking those hard questions. Some of the concepts that tie into this are – groupthink. In groupthink there is a
desire within the team for cohesion and it dominates over the individual will and the creativity of the team
members. We want to make sure that we’re not so focused on getting along and not hurting others feelings that
we’re still asking the right questions. When we look at groupthink, one issue is that the data or the evidence might
be ignored. We really want to challenge the team to come up with alternative approaches and solutions. So we’re
going after very aggressive solutions to really push the envelope. For this we need to encourage critical thinking
and reward individuality.

We also need to be careful when we see our team getting along too well that we’re not accepting opinions as
facts. Sometimes there is a desire to accept opinions of others as facts, rather than seeking the evidence
because that person might have more experience. Within Lean Six Sigma all of our decisions should be data-
driven. We want to make sure that we’re going back and looking at the data rather than relying on people’s
opinions. When we rely on other people’s opinions then we could have serious miscalculations. We want to
encourage the team to be as objective and critical as possible when we’re dealing with opinions and assertions.
We want to make sure that we’re making our decisions based on the actual data. Another issue that might come
up is that people rush to accomplishment. When we’re working on Lean Six Sigma projects, we’re typically
working on very aggressive goals. And we have team members that are working on full-time jobs that we might
be pulling in ad-hoc to the team.

Therefore they are quite busy and sometimes that desire for results might overshadow the need for appropriate
courses of the action. This is where the team may feel pressure to make progress and to meet those aggressive
deadlines as well. But we need to emphasize that quality takes patience; we need to go back to our project
management and time management skills to make sure that we're allocating sufficient time to accomplish the
tasks that need to be accomplished for that project. We also want to make sure that the team leader is not putting
too much pressure on rushing those accomplishments. Finally, it’s important to talk about attribution. This is
when conclusions are formed based on inference, rather than on facts. Again this is where we need to go back
and rely on data to make data-driven decisions. Attribution can be very dangerous when we’re gathering
requirements for the project and it can also lead to poor decisions. When this happens, one approach we may
consider is to ask those making the attributions to paraphrase their information. We want to make sure that the team
conclusions are based on verified sources and data, and not just opinions.

Meetings
Being a Six Sigma professional it is very essential to understand how to hold effective meetings and how to be a
good meeting facilitator so that we effectively move the team in the right direction. Team meetings are essential
for Six Sigma projects. Therefore, we need to understand what can go right and what can go wrong. There are
various tips for avoiding, as well as handling, some of the common problems if they do arise during Six Sigma team
meetings.

•Floundering: The first type of issue is when a team is floundering. As a part of the Six Sigma team it’s important to
recognize when this is happening. This is typically when a team is struggling to make progress and move forward. This is
characterized by false starts or circular discussions since the team doesn’t necessarily have a good path forward. As a
facilitator we might also notice procrastination or an inability of the team or team members to make decisions. Some of
the solutions to handle floundering are to provide data or resources so that the team can move the project forward. In
addition we can also adjust the team responsibilities based on people’s weaknesses and strengths to help get the right
people in the right area of responsibility to help drive the team towards making those decisions based on the data. One
more issue could be that the team does not have a clear direction. Therefore one way to help with reducing floundering
and getting the team moving forward is to help clarify the expectations so that the entire team understands what their
ultimate goal is. In addition, it could be that there is a lack of communication and opening the channels of communication
can also help move the team forward.
•Digression: The second issue is digression. This happens when a team starts discussing subjects in team meetings that
aren’t on the agenda. These are also typically tactics that can be used by the dominate team members to move the
discussion into an area that helps them meet their own hidden agenda. Therefore we need to be careful with some of the
dominant team members and watch how they are leading the discussions. Digression is characterized by turning off topic
and so having somebody go off on tangent during the discussion or also distracting other team members with chatter. As a
Lean Six Sigma facilitator some of the solutions to handle digression include trying to control the participants without
inhibiting the energy or enthusiasm, but trying to keep them on course with the discussion. We can also obtain full team
agreement on the need for limits. One must focus on the discussions by simply bringing the discussion back to the topic at
hand or the agenda items.
•Tangents: One of the common issues is tangents, also called offsides or going off on a tangent. This happens when the meeting
lacks a clear purpose. Therefore it is more difficult for the team to stay on path and keep the discussion on task since there
is no clear goal for what the team is trying to accomplish. Such issues could also occur when the agenda is too loose or
nobody is really in charge or there is no clear team leader or facilitator. To handle such a situation it is important to set
clear objectives for the meeting so that everyone understands why they are attending the meeting and what the purpose of
the meeting is. We can also use a moderator to help reduce the number of tangents and keep the team on track. Another
method for handling tangents is to record deviations so that they can be addressed at a later stage. Many companies will
call this a parking lot where they keep a list of the side items that are not directly related but they do not want to lose sight
of them. So they put them on a list that they referred to as a parking lot or they park the discussion on that topic but they
can still come back to it later if it’s actually necessary for that discussion.

There are several common Six Sigma team tools that help Six Sigma teams to reach decisions or solve problems
on a Six Sigma project. Some of the most crucial tools include brainstorming, nominal group technique, and
multivoting. The power of these three tools comes from the energy that they draw from the team dynamic. These
tools are influenced by the rational and social forces from the team members. The purpose of these common
tools is to generate a large pool of new and creative ideas and then separate out the vital few items that are
important from that large pool of ideas.

Brainstorming
The first tool used is brainstorming. This tool is used when we are trying to generate numerous ideas.
Brainstorming is primarily an idea generation tool that uses freedom and creativity to develop a large number of
ideas and solutions to problems. The key to brainstorming is unlocking that creativity from the team.
Brainstorming is based on the premise that quantity of ideas breeds quality: the more ideas that are generated,
the better the chance that good ideas will be found amongst them when the list is taken down to those vital few.

To achieve this goal, brainstorming requires participants to defer judgment. Therefore the team members that
are involved should be free to express themselves without worrying about criticism from other team members
and without feeling pressure to self-censor their ideas. We want those ideas to be as free-flowing as possible.
The process of brainstorming actually occurs over seven different steps –

•The first two steps include identification and information. In these steps, what we’re trying to do is identify the problem
or the opportunity. We also want to set the goal for the session. It’s important to identify what we’re trying to accomplish
and then by establishing these goals, this helps the team members start thinking freely. We also need to make sure that we
provide the team that’s working on the brainstorming process with enough relevant background data and criteria that’s
relevant to the issue that we’re trying to solve.
•During the third stage, that is the speculation stage, the bulk of the brainstorming occurs; after the main brainstorming
session, the team will break for a time during the suspension stage. While we're developing and generating ideas, we want
to ensure a free flow of ideas without evaluation and without criticism. The participants should be
encouraged to contribute all ideas. We want to recognize even wild or impractical suggestions because they may spark
other original or valuable ideas so they’re equally important. We also want to make sure that we’ve a facilitator involved.
This would be the person that’s responsible for ensuring the session stays on track. And that everyone in the team is
allowed an equal opportunity or chance to participate.
•The fourth stage is the suspension stage; this is where the team contributes their last-minute ideas and then breaks to
consider what has been proposed and let those ideas settle. This is the time when the recorder would compile a written list
of all of the ideas to make sure everything is recorded. And again it’s important to make sure that we’ve got that time
where we can take a break and let the ideas settle to see if anything else comes up.
•During the evaluation stage, the team moves from quantity, that large number of ideas, to focusing on those
vital few to drive the quality of the ideas. The team should reconvene to establish acceptability criteria with regard to cost,
time, quality, functionality and scope and then they should rank those ideas. We want to make sure that the team uses
these criteria or whatever is appropriate to rank the ideas that were provided. This is where the team can then eliminate
those low ranking ideas to focus on the vital few.
•During the analysis stage, the team reviews the top few solutions and can check them and validate them against the
project data and requirements.
•Finally during the presentation stage, that is the seventh stage, the final report is prepared and it’s presented to the
customer or the principal stakeholder or any other relevant decision maker to make sure that we have the final approval.
Nominal Group Technique and Multivoting
Nominal group technique is a Six Sigma tool that uses a more structured format than brainstorming. The
term nominal is used as the individuals involved in this team have minimal interactions so they’re really only a
group in name. With respect to a nominal group technique, it starts with a facilitator presenting a problem or a
topic to the team. Then the team or group members write down ideas on paper. This is done silently so there is
very little to no interaction among the team members. Because of this, there is less self-censorship than we
have with brainstorming as each individual is writing down their own comments privately. The other aspect of
nominal group technique that’s different than brainstorming is that we don’t have the negative group discussions
and dynamics, and it doesn’t inhibit the idea-sharing as much.

Steps within the nominal group technique


•Problem Presentation: The first step in the process is where the facilitator presents the problem or the topic that the group
should be working on. It’s useful for the facilitator to do this with an open-ended question and it could be a question such
as “what are some of the ways we can improve the process?” Then the group has a brief discussion so that everyone
understands the issue before they move forward and they understand the goal of the session.
•Pen-down Ideas: The second step in the process is where the team members work privately to write down their own
ideas. They’ve had that brief discussion as a team to understand the goals and the direction. And now each person is given
several minutes in silence to individually brainstorm all of the possible ideas and write these down. This allows the team
to have sufficient thoughtful reflection to develop their own ideas.
•Read responses: In the next step of the process the facilitator asks each of the participants to read one of the responses
and they read their responses one at a time. This is typically done in more of a round-robin fashion where we have one
response per person each time and we go in the circle around the team to make sure that everyone presents their ideas
equally and they’ve heard each other’s ideas. As these ideas are read, the facilitator is recording these ideas on a flipchart
or on a whiteboard. This occurs until each participant has given several responses. During this process people can ask for
clarification and they can ask questions. But the general ground rule is that no criticism is allowed. We can ask for
clarification to understand and encourage clarification questions to make sure everyone understands the suggestion.
•Discussion: After all of the ideas have been written down, each one is designated with a letter or number so that it can
be identified. Each idea is then discussed in the order in which it appears, so there is no prioritization at this point; the
ideas are simply listed as the team went through the round-robin exercise.
•Ranking Ideas: After the team discusses each idea, the team members are then asked to select a predetermined number of
ideas. In general, the top five or seven ideas are ranked in descending order. For instance, we might allow each participant
to select their top five choices from all of the ideas generated during that session, with each participant assigning five
points to the choice that they judge to be the most important. This way the team members are each ranking their top five
ideas from five points down to one point.
•Recording: In this step of the process each ranking is then recorded. This is usually done on a series of cards so we can
capture and calculate the scores from all of the participants. All of the scores for each idea are added together, resulting
in a total score for each idea. Using the total scores, the ideas are then ranked according to the scores that they received;
the idea with the highest score has the highest priority (a simple tally of this scoring is sketched after this list). At this
point the team might decide to prepare a report showing the ideas that received the most votes.
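
A minimal Python sketch of this ranking and recording step is shown below. The idea names and the three participants' rankings are hypothetical; each participant's first choice earns 5 points, down to 1 point for the fifth choice, and the totals determine the priority.

from collections import defaultdict

def tally_ngt_rankings(rankings):
    # Sum the points each idea receives across all participants' top-five lists
    scores = defaultdict(int)
    for top_five in rankings:                  # each list is ordered best-first
        for position, idea in enumerate(top_five):
            scores[idea] += 5 - position       # 5, 4, 3, 2, 1 points
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical rankings from three participants
rankings = [
    ["Idea A", "Idea C", "Idea B", "Idea F", "Idea D"],
    ["Idea C", "Idea A", "Idea D", "Idea B", "Idea E"],
    ["Idea A", "Idea B", "Idea C", "Idea E", "Idea F"],
]
for idea, score in tally_ngt_rankings(rankings):
    print(f"{idea}: {score} points")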

Multivoting
An additional convergent thinking tool in the Six Sigma methodology that helps prioritize options is
multivoting. It helps the team to take multiple votes to rank or narrow down a list of ideas, options, or solutions.
The advantage of multivoting and nominal group technique is that we are driving towards consensus since each
team member is participating in the process. Multivoting starts with generating a list of ideas. Similar ideas are
then grouped by affinity and placed into groups. We’re looking for the natural groupings here.

Each group is assigned a number so that it can be identified. Based on these groups, each participant then gets
to choose one third of the items that they determine is the most important and then each participant can cast
votes for each item. Similar to nominal group technique the team then eliminates the items with the least number
of votes and the team can repeat this process until a specific number of ideas are reached. For example, it could
be that the team wants to determine what the top five ideas are. Multivoting, while similar to nominal group
technique, differs in that the voting is done as a group, whereas in nominal group technique it is done as a
private decision-making process. Also, while nominal group technique is useful for a smaller list of ideas,
multivoting is particularly effective for use with large groups of ideas or long lists of choices.
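
A minimal sketch of one multivoting pass, with made-up idea groups and ballots, is shown below. Each participant votes for roughly one third of the grouped items and the votes are tallied as a group; in practice the team would repeat the pass, dropping the lowest-voted items each time, until the desired number of ideas remains.

from collections import Counter

def multivote_round(ballots, keep):
    # Tally one round of group voting and keep the highest-voted items
    tally = Counter()
    for ballot in ballots:                 # each ballot: the items one participant voted for
        tally.update(ballot)
    return [item for item, votes in tally.most_common(keep)]

# Hypothetical ballots: four participants each choose about one third of nine idea groups
ballots = [
    {"Group 1", "Group 3", "Group 7"},
    {"Group 3", "Group 5", "Group 7"},
    {"Group 2", "Group 3", "Group 9"},
    {"Group 3", "Group 7", "Group 8"},
]
print(multivote_round(ballots, keep=5))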

In Lean Six Sigma methodology it is very critical to have good team communication to ensure that team achieves
its objectives. The project teams we are pulling together are typically cross-functional; therefore, communication
might be a challenge. We will have team members that differ in professional experience, status, propensity to
take on tasks, cultural background, personality, how they respond to rewards and incentives, and also how they
interact with other people. As the team leader, it is important to balance the individual nature of team members
with the appropriate delivery mechanism to make sure that the team members are effectively communicating
and they are able to convey the message appropriately. When we think about how we are going to convey our
messages, it is important to consider that team members need the right information and they need it at the right
time. Communication is an art. It’s an art of providing information in order to achieve a clear vision and a shared
meaning. Humans communicate all the time, but we want to make sure information is being provided in a
meaningful way; oftentimes information is shared without meaning. When we think about this, lack of
clarity is one of the primary causes of frustration and ineffectiveness within our team.

As a Lean Six Sigma professional, the focus is to make sure that our team members understand the important
project processes. Also it’s important that as we do this, we’re being an effective communicator as a team leader.
Within Lean Six Sigma communication is a vital part. We need to be able to effectively communicate within our
team but we also need to be able to communicate effectively with the various stakeholders.

Some of the types of communication that we will be dealing with as a team leader are sharing the Lean Six Sigma
vision.

•We will also be educating stakeholders about the Lean Six Sigma subject matter content so that they understand what’s
going on with the project itself or with the process improvement efforts.
•We will also be sharing and discussing the results of business diagnostics.
•We will be reporting progress and conducting reviews.
•We will be delivering and communicating very technical information. But also with status updates, we need to think
about each time in terms of the information we’re trying to convey and who our audience is. When we do this, we should
design an ongoing awareness plan so that we have that constant open communication. Another way to do this is to
publicize the successes of the teams.

Several types of communication that are used on a team


•One-way Communication: In one-way communication, this is what happens when information is relayed from the sender
to the receiver. There is commonly an expectation of a delayed response. E-mail is an example of one-way
communication where there is delayed feedback between the sender and the receiver. Other forms of one-way
communication include memos and announcements. And this is where no response is expected. There are other examples
such as progress reports, status updates, and review feedback. When we look at one-way communication it’s good to use
this when we’ve got information that does not require an immediate response because there will be a delay in the
communication back. An advantage of using one-way communication is that it’s timely and it’s very easy to use.
However, the disadvantage of one-way communication is that, while we can confirm that we actually sent the message,
we can’t confirm that it was read or how it was understood by the stakeholders.
•Two-way communication: Two-way communication is a method where both parties are transmitting information. It’s
interactive and multidirectional. An example of two-way communication is using a telephone. When we talk about two-
way communication this is communication where we can react and respond to each other in real-time. And it’s
appropriate when we would like immediate response and when there is information that’s given that’s sensitive or it’s
likely to be misinterpreted. So it’s beneficial for brainstorming feedback and creative collaboration because we do get that
immediate feedback. Examples of two-way communication include face-to-face meetings, video conferences, or phone
calls. When we look at communication there are several communication tools that are important to understand as well.
First, when we talk about written communication, this is a form of one-way communication such as an e-mail blast. These are best suited
for messages that are casual and they are not sensitive to time. It’s ideal when we’re talking about information that’s
factual and sort of an announcement. Using e-mail however indicates to the receiver that the information is information-
based, routine, and uncomplicated.
•Voice and Video Communication: When we talk about voice and video communication, these are tools such as telephone
or Skype. These types of tools are appropriate when we have urgent or complex issues that need to be analyzed. It’s also
useful when we need clarification on issues or issues that don’t require documentation. In addition typically two-way
interaction indicates to the receiver that the message is important and it might be on a personal basis.
•Meeting Tools: The third type of communication tool is meeting tools. These are used when there are sensitive tasks that
are collaborative or complicated. Also if we’re trying to convey an emotion or handle a difficult situation, these are all
better when they are done face to face. And when we have meetings, this should indicate to the receiver that the message
is important. It requires commitment and it’s potentially influential. It’s important when we look at this that we’re
choosing the right tool. When we look at the volume of information, the more information there is, the more likely one-way
communication might be the best. If we think about trying to deliver a considerable amount of content such as quarterly
performance summary face to face in a group setting, the individuals are going to miss out on information. It would be
much easier for the audience that we’re trying to convey this information to, to digest this information by reading a report.

When we talk about the complexity of the message there are richer forms of communication that offer multiple
methods to access that complex information. For example if we want the audience to capture nonverbal cues
such as tone and body language to fully understand the message of the meeting, then we want to make sure that
we are selecting the most appropriate communication tool because otherwise there is a good chance that the
message might get misinterpreted by the audience. Therefore delivering the message in a setting where we could
get rapid feedback is important to confirm they understand it. We also want to consider the nature of the
message and whether or not we need immediate feedback, such as in a two-way setting, and whether video
conferencing or co-location is possible.

The other aspect to consider is the degree of collaboration. Where there’s a high degree of collaboration, this is
when meetings are typically the best tools because sometimes technology can present a barrier to effective
decision-making and in those cases a meeting can be the best option.

Organizational Communication in Six Sigma


Under Lean Six Sigma another key aspect of communication is organizational communication.
This is necessary to make sure we are meeting the information needs of the project stakeholders. When we think
about organizational communication, a key aspect is to think about the internal customers versus the external
customers and how we are going to communicate effectively with those two groups. When we think about the
stakeholders they require timely communication and the right amount of information so that they are kept aware
of the progress and what’s going on within the projects such as how it’s going to impact them. It’s also important
that we provide the stakeholders with opportunities to give us feedback and input.

When we talk about organizational communication, there are three key types of organizational communication.

•Top-down Communication: There is top-down approach that comes from the executive team and upper management,
down throughout all levels of the organization. Top-down communication is used to influence through information. This
is typically done to remind employees about the vision of the organization, the organizational strategies and objectives,
and any policies that might have changed. Or they might need to be reminded of any potential developments within the
organization. Another aspect is that it can be used to deliver performance feedback for annual reviews and things like that.
•Bottom-Up Communication: Then there is the bottom-up approach, which starts with the lower levels of the organization and
moves up through the levels of the organization to upper management and the executive team. Bottom-up communication
helps to keep managers aware of what's going on within the business. It can be used to make upper managers aware of
the progress and performance of a Six Sigma team, any problems that might be happening within the Six Sigma team,
or any suggestions for improvement. These aspects can also be used from a general standpoint of not just Six Sigma
projects, but also other things that are happening within the organization. For bottom-up communication to
happen, it's important that employees believe that the door is really open and that they can come in and communicate with
upper management about what's really going on within the organization.
•Horizontal Communication: Horizontal communication moves back and forth between team members or departments or
other groups within the organization. When we talk about horizontal communication, this is very important to drive
collaboration between departments and individuals. We need to make sure that employees communicate across those
functional boundaries so we’re breaking down the silos within our organization. This can be achieved through cross-
departmental committees, teams, or different task forces that pull in diverse cross-functional team members. So now
let’s look at an example for each of these and how it relates to Six Sigma. When we talk about top-down practices this
could be the executive team within the organization providing information on the strategic long-term goals of the
organization.

We then take it down through the organization to help select appropriate Six Sigma projects to make sure that
there’s a link between the continuous improvement projects and the vision of the organization. When we look at
bottom-up communication these could be progress reports from the Six Sigma team in which they are providing
status reports on what has happened within the team and the progress that they are making to project sponsors
and the executive management team. This could also be the Six Sigma team asking for resources if they need
help with their Six Sigma projects. In terms of horizontal communication this could be as simple as a team
members talking to each other and providing each other the status updates. Another example of horizontal
communication is when the Six Sigma team presents their status updates in town-hall meetings or company-
wide meetings and people are able to share the best practices from one process within the organization to other
similar processes.

Process Modeling
As we all know the Six Sigma methodology consists of define, measure, analyze, improve, and control, or DMAIC,
methodology. Within the measure stage of the DMAIC methodology, we are going to assess the current state. In this stage we
gather the baseline information and build an understanding of how our process currently operates. In order to do that, it's
essential to map the processes appropriately so that we understand all of the steps involved, the interdepartmental activities,
and the various individuals involved in the process. In addition, in the measure phase, we start by gathering
data. Once we gather the data, we then use that information to further analyze the processes. In the measure phase of DMAIC
methodology, one of the key tools is process modeling. Process modeling is primarily used to ensure that we thoroughly
understand all of the different steps, aspects, and individuals involved in a process. The process of process modeling involves
making a visual model of the process, which is very useful in a Six Sigma project implementation, as it gets all the team
members involved in understanding the process the same way. Additionally, process modeling is performed so as to
understand the current behavior of the process and how that compares to the desired behavior. By understanding the current
versus the desired behavior, we can perform a gap analysis in order to identify key areas for process improvement in later
steps. There are three key types of process modeling results –
• Descriptive results: Descriptive results involve information that traces what actually happened. This is where we get the baseline information to understand what the current process is and how it’s operating, so we can see the process as-is.
• Prescriptive results: Prescriptive results involve trying to understand and define how the process should run. Here we are looking more at the future to understand how the process should ideally run.
• Explanatory results: Explanatory results involve adding the finer details of the process. We illustrate the who, when, where, and why in the results. These are the details of the process that we can use to further improve the process and how it operates.

Illustration: Now consider a process such as a loan application, we can go through the process and trace the actual steps that
are in it. Then we can start with the loan applicant applying for the loan. And how the loan application goes to a specific
individual at the bank or lending institution. And that would provide the descriptive process modeling. When we talk about the prescriptive results, we further identify how that process should run and start prescribing process improvements. In terms of the
explanatory results of the loan application process, we can add in those fine details of who, when, where, and why. We can
add in the information, such as who is applying for the loan and who is approving the loan at various steps of the process.
Where is that happening, and at what step in the process is that happening? And why do specific steps occur?
Areas for Process Improvement
Indeed, process modeling is considered important in Lean and Six Sigma process improvement efforts since it is used to model and analyze an existing process. It can then be used to evaluate that existing process to identify areas for improvement. We can use process modeling to determine whether we have the right people involved in the process, whether they’re involved at the right time, or if we have too many people involved or too many hand-offs. Also, we look for any type
of barrier that may be hampering process flow. These are the main types of questions that we will be asking when we model
the process. We can then use the process map flow chart to further understand the processes.
This process will cover the four types of information obtained by examining process models.
• The first type of information or evaluation is the process flow. As a team effort, it’s important to go through and document
the process flow, understand how the process currently operates.
• We then review the process and the decisions within the process to determine if the product is good or bad. Based on that, we either ship the product if it’s good or rework the product if it’s not.
• We then examine the process flow diagrams so that we can use this information to evaluate the process. One of the ways to evaluate the process is by looking for rework loops, the second type of information we look for when analyzing process models. When something doesn’t go right within the process, rework loops are where rework or modification of that product or process is required.
Now, if the product is good, then it can move forward to one of two different processes. However, if it’s not good, then the
process goes down and must be reworked and then checked to see if it’s good or not. It is essential to understand where this
is happening within the process since these are the defects within the system. We can also use Six Sigma to improve the
quality of the products and services. For instance, we could be machining a shaft for an automobile, if the diameter of the
bearing assembly is too large, we might have to go through and scrap the product. If it’s too small, if it’s an undersized inner
diameter, then we can go back and rework the process.
Also, we have to check to see if the product is good at that point. All of this rework spends extra time, money, and resources to correct what was performed incorrectly initially, rather than focusing efforts on being more proactive. Process delays are the third type of information we look for when process modeling. For instance, these could be products waiting in
queue. This means whenever we have something waiting in queue or we stop production for lengthy deliberations to
determine what that next course of action should be, this causes a process delay. When we have these steps within the
process, then we interrupt the flow and delay the lead time for when the product should reach the customer. Let’s take an
example, if we try to change an internal payroll process but are waiting for someone to give the go-ahead, and finally the person in charge says it is okay to move forward with the step, that’s a delay within the process. That increases the overall lead
time for that step.
The fourth and last type of improvement we want to gain from process modeling is opportunities for improvement. So when
we go through and identify the various steps within the process, we also want to identify where there’s waste in the process.
Here, waste in the process indicates the things that we want to eliminate, such as lengthy delays and rework queues. These
are all non-value added activities that further delay the time from when the customer places the order of the product to the
time the customer receives the product.

Various Types of Process Maps

“A process map is a series of symbols and lines that show the flow of activities within a process.”
When we draw a process map, it shows the process boundaries. This really helps to scope the process from
where the process starts to where it ends. We can also use the process map to show different functional
boundaries. In addition, we can show interactions between steps and how they are related. On the other hand we
can also use the process map to show any disconnects within the process. As we draw the process map, we
need to include all value added and non-value added activities. We then analyze the process and use the various steps included to identify what’s non-value adding within the process and eliminate it thereafter. We can also identify where there are bottlenecks within the process. In addition, we can use the process map to show the inputs into the system (the information, applications, or materials that are used within the processes) as well as the system outputs, such as finished goods, services, or tasks.

•Responsibility Matrix: One of the highest level types of process maps is the responsibility matrix. A sample
responsibility matrix is used to analyze a process by looking at the steps within the process, and who really owns those
processes. Primarily there are four key levels of responsibility within the responsibility matrix which include responsible,
accountable, consulted, and informed. At times the responsibility matrix is also commonly known as a RACI chart. RACI
chart is used to show the various individuals that are involved within a process, the process itself. And then who is
responsible, accountable, consulted, and informed. By identifying those four different levels of responsibility, we will also be able to provide good communication within the process, as well as align ownership across the different aspects of the process.
•Top-Down Chart: One more type of process map is the top-down chart. A top-down chart is laid out as a table: the header identifies the process, and the column headers list the major steps, such as establishing contact and writing a proposal. It is quite similar to the RACI matrix, but provides a little bit more information about the process.
•Functional deployment map: Further a third type of process map is the functional deployment map, also known as a
swim lane diagram. The functional deployment map depicts the information from the responsibility matrix and the top-
down chart, as well as additional information. We would focus on capturing the functional information by each functional
department. We map the steps within the process as we go through each step, and show any changes in the process. Any
time the flow crosses one of the swim lanes, or functional barriers, we’re showing the hand-offs and where those occur
within the processes. This tool is very beneficial since it captures those hand-offs between each of the departments. It also
helps show some of the complexity within the process because the nature of the hand-offs and any potential information
that could be lost as well as any type of miscommunication due to changes within those barriers.
•Workflow Diagram: The fourth and final type of process map is the workflow diagram. A workflow diagram is primarily used to depict the physical movements of the individuals involved in the process, and is also referred to as a ‘spaghetti diagram’.

Symbols in a Process Map


A process map is made up of a variety of symbols, each of which has a very distinct meaning.

•Oval: The first symbol is an oval. It represents the start or the stop of a process.
•Rectangle: Within the process, we will have several steps or operations and each of these steps is represented by a
rectangle. Typically, within each of these steps, we would also include text that describes what’s happening within that
step or that operation.
•Diamond: A diamond is used to depict a decision that has to be made within that step of the process.
•Circle: A circle is used to depict inspection or review steps during the process.
•Delays: Delays within the process are indicated by the D shape.
•Parallelogram: In case there is an input or output within the process, they are represented by a parallelogram.
•Flow line: Once we have the various process mapping symbols placed in order, a flow line is placed between each step to
indicate process flow.
•Arrow: An arrow is used to indicate transportation or handling. An arrow with a jagged line represents transmission.
•Rectangle with rounded edges: A rectangle with rounded edges indicates an alternative process.
•Square: In case there is a measurement within the process, it’s represented by a square.
•Upside down triangle: An upside down triangle represents storage.
•Trapezoid: A trapezoid is used for representing manual operation.
•Rectangle with inside lines: Any sub-processes are represented by a rectangle with inside lines at both ends.

Note, different organizations or industries may vary in how they use symbols. There are so many possible
symbols that can be used and some are used less commonly than others. It is therefore always suggested to
include the legend or a symbol key within the process maps. This makes it simpler for others to understand the
process flow maps. Once we identify what the process is and what the steps are, we need to start putting the
pieces of the map together. It’s a process similar to putting the pieces of a puzzle together. It is important to
ensure that the team has correctly identified all of the possible steps within the process. As a team, we are required to examine the symbols from beginning to end as we walk through and follow the actual process. This ensures that
we are validating that we have truly captured what the process is. We could also suggest any other activities that
need to be included, to make sure we are capturing everything happening at each step.
Process of creating a Process Map
We shall now study the steps involved in creating a process map. Before we begin to create a process map,
several elements are required to ensure we have the information that is required.

•Who: The first set of information is who is involved in the process. We need to understand who is involved within the
process since those are potential people that we will interview and work with to fully understand how the process
operates.
•What: The next element is the what. This indicates the steps and activities that are involved within the process.
•When: We also need to know the when. Once we have the steps and activities, we would need to sequence those steps
and activities based on the proper order of the process.
•Where: The last set of information involves the where. We need to know where these things happen, where inputs come
from, and where outputs go.

Here we assume that we have got the preliminary information, who, what, when, and where of the process.

The process of compiling the process map includes four steps

1. The first step is to define the process boundaries. We need to know what steps fall within the process scope.
Basically we intend to ask, where does the process start, and where does the process end. So let’s explore a brief
scenario to understand how these steps would occur.

Illustration: We start by assuming that we are a Six Sigma Green Belt. A refining company is asking us to document the process currently being used to deliver refined gasoline to service stations. The objective of the management at the refining company is to maximize the usefulness of the on-road time of delivery trucks. This can be done by optimizing the delivery routes based on service station locations and consumption rates.
The first step we take is to meet with others on the project team. These could be people who schedule the deliveries, people who process the orders from each service station, or others who calculate the amount of gasoline per delivery vehicle. From these meetings we discover that the service stations measure their average selling rates, and their regular refills are scheduled based on the output rate of the gasoline. Service stations can
also make emergency requests if they’re selling greater than expected amounts of fuel. So based on this
information, one should confer with those involved in the process to determine where the process should begin
and where it should end. So we would begin mapping, creating the map of the delivery process, by defining what
those process boundaries are using this information. At this point, we can determine that the process begins
where the service station is low on gasoline and they send an order to the refinery. This process ends when the
delivery vehicle returns to the refinery. Anything between those two steps is a process that we’re going to
investigate. And those are the steps that we need to understand.

2. Once we have determined all the steps that are involved, we are ready to move to step two, which is to list those
steps in order. This is typically done by consulting with those involved. We want them to go through and analyze
each of those steps to put them in an appropriate process order. For this, it is important to start at the high-level.
Begin by defining what those major tasks are and any decision points within the process. Once we have those
high-level major tasks, then we can further break those steps down. This is commonly done using sticky notes so
that we can move the processes around and add in finer detail.
3. By using this information, we can then create a process map with symbols, which is step three. Now, this can
be done with flowcharting software if we have become more proficient with it, but otherwise, it can also be done
by hand. Continuing with the scenario, the team begins to create the process map by looking at the first three steps.
The process map starts with the beginning of the process, and that’s when the station is low on gasoline. Here,
a parallelogram is used to represent an output. With reference to the illustration, this is the station sending an order
to the refinery for more gasoline. Also the rectangle is used to represent the order being dispatched to the
refinery’s delivery system.
4. The next steps of the process are defined with the help of the illustration. We have an input of the
refinery receiving the order, and that’s represented by a parallelogram. This step leads to a fuel truck being filled
at the refinery. There’s a separate process for filling the fuel trucks which is represented by a rectangle with two
vertical lines on the ends to represent the predefined sub-process.
5. The fifth step in the scenario process is represented by a half-oval or the D-shape. And this is indicating there’s
a delay in the process as the truck drives to the station. Filling the station tanks is another predefined
subprocess. Therefore it’s being represented by a rectangle with two vertical lines on the ends.
6. Finally, the last three steps within the process start with a decision. The diamond is used to indicate there’s a
choice being made, depending on whether or not the truck is empty. If it is empty, the process ends with the truck
returning to the refinery. And for that, the word yes would be written next to that line that connects to that
decision. The word no would be written next to the line that connects to a decision if the truck is not empty.
7. The next step is represented by a parallelogram that indicates that the delivery agent retrieves the information
about the next order to be filled. This step is connected to the half-oval previously drawn, which represents the
delay where the truck drives to the station. The process continues in this manner until the truck is empty and
needs to return to the refinery. So once we have created the process map, there’s one more step in the process,
and that is to verify the map. This is done by sharing it with the relevant stakeholders. These are the people that
need to understand the map, and those that need to carry out the steps. We want to ensure this is done so that we can make the necessary corrections. We should have stakeholders review the process map, identifying any
areas that are not clear or where additional information is needed. Once that’s done, we can implement any
necessary tweaks to finalize the process map.

Process Map interpretation


The objective of mapping a process is to model and analyze the process. We can use the process map to
evaluate the existing process and identify areas of improvement. We here look for good flow, opportunities for
improvement, delays, and rework loops. For instance we have delays within the process and also have a rework
loop that we want to try to avoid. When we examine opportunities for improvement, we can eliminate that rework
loop by making several changes within the process. For instance, perhaps we could fulfill multiple orders at once
by keeping the delivery going until the truck is empty. This removes that decision loop and helps streamline the process and drive that improvement.
Illustration: We should now consider an example of how to analyze a process map. Let’s consider a process
where we are creating a casting mold using green sand. The process that we’re considering uses a sand muller, which takes in several components –
•First, it has the clay that comes into the system.
•It also has water and sand. These three ingredients are placed into the machine that performs the mulling, which would be drawn as a rectangle.
•Once these three ingredients are mixed together, there’s a decision.
•The decision is when the green sand is tested to see if it has the right composition.
•In case it does not have the right composition, this is broken down and sent back to the mulling machine.
•But if it does have the right composition, yes, it goes on to the next step of the process, which is where the casting molds
are created.

Note that when we have a no, it creates a rework loop and a delay. Since rework and delays are forms of waste,
the team would note this as an obvious improvement opportunity. An ideal process would be improved to
eliminate the need for the rework loop in the future.

Written Procedures
Under Lean Six Sigma, it is important to understand the concept of documenting the procedures and steps of a
process, in other words, the need for process documentation. Written procedures are always important because they help workers and employees understand what they need to do. Some of the benefits of a written documentation process are –

•With work instructions in place, we can ensure that everybody is following the same process and therefore we can reduce variations within the process.
•Ensures that the results are repeatable.
•Also it is important to preserve the knowledge of the best practices and further drive these best practices by documenting
those in the processes.
•Written procedures are used to describe a process at a general level, at a high level.

Example – In manufacturing and operations, process documentation could be the operation sheets that provide
the technical specifications for the products that are being produced. For a service organization such as a call
center, the process documentation could be the specific scripts when a customer calls with a certain issue. The
International Organization for Standardization publishes the ISO 9000 family of standards, which provides requirements for documentation and actions. ISO 9000 is a quality management system standard that takes into account various aspects of quality management.
These standards are used to provide guidance and tools for organizations that want to make sure that their
products and services consistently meet the customer’s requirements. They also drive process and continuous
improvement.

Essentially, written communication reflects the organization’s quality standards so that the individuals that are
reading them understand when quality inspections are in place and what happens if a product or service does not
meet those quality standards. Written procedures are owned by the people responsible for that process. These
people are also responsible for making sure that the written procedures are available and updated in a timely manner when
changes are made to the process.

Elements of Written Procedures


•Purpose and a scope for what the procedure’s covering.
•Terms and definitions must also be included so any terminology not commonly known to potential readers is clear.
•Necessary documentation about the procedure itself as well as methods and responsibilities of the procedures.
•Training should also be included in the process of written documentation. Training could include how to use
measurement devices and how to properly record quality data and actions to take if defects are discovered.
•Written procedures should also include pertinent review information as part of document control, outlining any changes,
updates or modifications including by whom and when.

Work Instructions
Work instructions are a very common form of process documentation in Six Sigma. Work instructions are a bit
different from written procedures as work instructions provide a specific description of the process, including the
sub-processes, and they include much more detail. Work instructions may also include technical drawings,
specifications, safety reminders, and tips. Work instructions are typically written and used by the people who are
actually performing the task. A copy of the work instructions, with the specific information on how to perform the process, is typically provided in the work area. Therefore it is easily accessible to the people that are performing that specific work.

In order to understand the difference between written procedures and work instructions we consider an example
of each. We consider comparing an example of conducting 250-hour maintenance of ore-hauling trucks.

They clearly differ in terms of the degree of detail –

•Written procedures provide a very high-level, outline-type summary. It starts with performing running checks, shutdown and lockout, deactivating the fire suppression system, checking the cooling, lube, and fuel systems, and so on. The written procedures also include inspecting the cab, checking the front and rear wheel assemblies, and checking the air intake system.
•On the other hand when we look at how that translates into the work instructions, it includes much more information in a
much more detailed manner. If we consider just step one in the written procedures, performing a running check, we can break that down into finer details as work instructions. This would include inspection, override, low idle, dynamic RPM, box
interlock, and so on. It also includes propel interlock switches and hoist and box pins.
•Also it is essential to note that the written procedures and work instructions complement each other since they should
relate to each other, but each performs a very unique function. The written procedures are high level. Whereas work
instructions get down to the very specific sub-steps needed for each of the steps within the written procedures.

Introduction to Probability
Probability is defined as a measure of the likelihood of an outcome that is mathematically represented by a
capital letter P. In general the concepts of probability are provided with some simple examples, such as tossing a
coin or rolling a die. So when we roll a die once, the chance of getting a 6 is 1 out of 6. Similarly when we toss a
coin, the chances of getting heads are 1 out of 2 because the options are either heads or tails.

Probability is expressed as a decimal or a percentage. For instance when we talk about quality, we are typically
considering defects. This means if we think about the probability of an outcome of a process being defective as 0.03, then we can also express this as the process having a 3% chance of having a defect. There are several
important rules to understand when talking about probability. Since we are talking about a percentage, all of the
probabilities will be between 0 and 1, with both values being inclusive.

•The sum of all the probabilities within a sample space will be equal to 1.
•The probability that an event cannot occur equals 0.
•Probability that an event must occur equals 1.
•Probability of an event not occurring equals 1 minus the probability of that event occurring.

Key terms used in process of calculating the probability


•Sample Space: Sample space represents a set of all possible outcomes of an experiment.
•Event: An Event is a set of outcomes in that experiment. There are two kinds of events, simple events, and compound
events.
•Simple events: The Simple event is defined as a single outcome of the performed experiment or an event that cannot be
broken down anymore.
•Compound Events: Compound events use a bit more sophisticated probability and are divided into mutually exclusive events, dependent events, and independent events. When talking about the complement of an event, an event can only have two outcomes: it either happens, or it does not happen. There are two alternatives in the formula.

The probability of the event not happening is equal to 1 minus the probability of the event happening. Thinking back to the rules of probability, the probability of the event happening plus the probability of the event not happening must be equal to 1. For example, if we think about the probability of not getting heads in a coin toss, we’re 50% likely to get heads, so the probability of not getting heads is 1 minus 0.5, which is also 0.5, or 50%.

Simple and Mutually Exclusive Events


We illustrate simple and mutually exclusive events with the help of an example. For a simple event, the probability of event A equals the number of ways that A could occur divided by the number of all possible outcomes. For instance, if we explore the probability of getting tails in a coin toss, there’s only one way to get a tail, therefore the numerator is 1. But there are two possible outcomes, heads or tails, so the denominator is 2. This makes the probability of event A 0.5, or 50%. Similarly, when we look at the probability of rolling a 3 on a single six-sided die, the probability is 1/6, or 0.167, or 16.7%. Also, if we think about the probability of rolling an even number, that would be a two, a four, or a six. Therefore the numerator is 3, since there are three favorable outcomes, and the denominator is still 6, so the probability is 3/6 = 0.5, or 50%.

We now consider a little bit about what mutually exclusive events are. For instance, when we flip a coin, it lands
either on heads or tails, but we can never have both. Therefore, they are mutually exclusive. Similarly, in a Six
Sigma context, we can have a product that’s either defective or not defective, but a product can’t cross both of
those boundaries.

In terms of a Venn diagram, mutually exclusive events are represented as two circles that do not overlap. When we talk about two events being mutually exclusive and we are looking for the probability of A or B occurring, we can simply add the probabilities of A and B together.

Illustration – From the given data, we know that the probability of obtaining no defectives in a sample of 100 items is 0.10; this is event A. The probability of obtaining exactly one defective item in the sample is 0.15; this is event B. If we want to know the probability of obtaining not more than one defective item in the sample of 100, that means we could have either 0 defectives or 1 defective. Since these are mutually exclusive events, we are looking at both events, where we have no defectives and where we have exactly one defective. Therefore, we can add the values of 0.10 and 0.15 and get 0.25, or 25%.
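A short, hedged Python sketch of the addition rule for mutually exclusive events, using the defect probabilities from the illustration above:

def prob_either(*probabilities):
    # Valid only for mutually exclusive events, which cannot occur together
    return sum(probabilities)

p_zero_defectives = 0.10   # event A
p_one_defective = 0.15     # event B
print(prob_either(p_zero_defectives, p_one_defective))   # 0.25, or 25%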

Independent and Dependent Events


Here we will discuss how to calculate the probability of independent and dependent events. The probability of
calculating two events, A and B, that are independent, is determined by multiplying the probability of A by the
probability of B.

Illustration – Let us suppose we have a jar that contains four red candies and eight black candies. We select two candies from the jar, one at a time, and as we select each one, we replace it immediately after it is selected. We want to know the probability of selecting one red and then one black candy.

Solution: Each selection of a candy is independent, since we are returning back to the original state. There are no
dependent factors on the next candy selection. Note that we are drawing two candies from the jar, one at a time.
So when we talk about the probability of the first event, since we have 4 red candies out of the 12 total candies,
the probability of selecting a red candy is 0.333, or 33%. When we talk about event B, we are looking for the
probability that we select a black candy. 8 of the 12 jellybeans are black, and so that gives us a probability of
0.667. The probability of A and B is then calculated by multiplying the 0.333 times 0.667. And that gives us 0.222,
or 22.2%. Also

It is important to note that if we draw two candies from the jar without putting the first one back, the second draw becomes a dependent event, which would lead to a different result. Therefore we must understand the difference between drawing with replacement and without replacement. In this illustration, we performed the draws with replacement, putting the drawn candy back into the jar immediately after it was selected, so each selection was independent.

For dependent events, the probability of (A and B) is calculated as the probability of (A) times the probability of (B
given A).
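Before working through the card examples, the following minimal Python sketch (illustrative only, not from the source) contrasts the candy example with replacement (independent) and without replacement (dependent):

from fractions import Fraction

red, black, total = 4, 8, 12

# With replacement the draws are independent: P(A and B) = P(A) x P(B)
p_with_replacement = Fraction(red, total) * Fraction(black, total)
print(float(p_with_replacement))      # ~0.222, or 22.2%

# Without replacement the second draw is dependent: P(A and B) = P(A) x P(B given A)
p_without_replacement = Fraction(red, total) * Fraction(black, total - 1)
print(float(p_without_replacement))   # ~0.242, a noticeably different result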

Illustration – Find the probability of pulling two cards out of a deck and then getting two aces. Here we are pulling
two cards from the deck one after another, and not replacing the cards back.
Total cards in a deck = 52

P (First card is ace) = P (A) = 4/52 = 0.0769

P (Second card is ace, given the first was an ace) = P (B given A) = 3/51 = 0.0588

P (Both cards are aces) = P (A and B) = 0.0769 x 0.0588 = 0.0045 = 0.45%

Illustration – From a Six Sigma perspective, let’s consider an example in which we want to check the quality of a shipment of bottles. It is believed that the supplier has a defect rate of 8%. We select 2 bottles at random from a sample of 100, pulling them one after another without replacement and testing them. We wish to know the probability that the first bottle will be defective and that the second one will be defect free.

Solution – We are going to use the probabilities based on this being a dependent event. We calculate the probability of (A and B), which equals the probability of (A) times the probability of (B given A). It is given that the probability that the first bottle will be defective is 8%, so this value is 8 out of 100, or 0.0800. Since we pulled the two bottles without replacement, the probability of (B given A), in other words the probability that the second bottle will not be defective, is based on the 100 minus 8, or 92, good bottles remaining out of the 99 bottles left in the sample. That gives 92/99, or 0.9293. When we multiply 0.0800 times 0.9293, we get a value of 0.0743, or 7.43%.
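A minimal sketch of the same dependent-event calculation in Python, assuming the stated 8 defective bottles in the sample of 100:

from fractions import Fraction

defective, total = 8, 100

p_first_defective = Fraction(defective, total)          # P(A) = 8/100
p_second_good = Fraction(total - defective, total - 1)  # P(B given A) = 92/99

print(float(p_first_defective * p_second_good))   # ~0.0743, or about 7.43%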
Addition Rule

Addition rule for Mutually Exclusive Events


Mutually exclusive events are two events that cannot overlap. For instance, if we consider two events A and event
B, that do not overlap between these two events. Therefore, when we talk about probability of event A or event B
happening, we could add the probability of A plus the probability of B.

Illustration-
P (Rain) = P (A) = 0.2
P (Snow) = P (B) = 0.6

We want to know the probability of it raining or snowing. These are two mutually exclusive events, therefore we add the probabilities together.

P (A U B) = P (A) + P (B) = 0.2 + 0.6 = 0.8

The probability of rain or snow would be 0.8 or 80%.

Illustration – There is a transportation company that wants to know if regulating its buses to a maximum speed
of 65 miles per hour would be fuel efficient. Probabilities have been established for the average speed the
company’s buses currently travel.
•The probability that a bus is traveling between 66 and 69.99 miles per hour is 0.141.
•The probability that a bus is traveling between 70 and 74.99 is 0.087.
•The probability that a bus is traveling above 75 is 0.007.

What is the probability that a bus is traveling faster than 65 miles per hour?

Solution – Since a bus can only be traveling within one speed interval at a time, these events are mutually exclusive. Because we want the probability of a bus traveling faster than 65 miles per hour, all three probabilities are included in the calculation.
The formula used to find the probability of mutually exclusive events is

P (A or B or C) = P (A U B U C) = P (A) + P (B) + P(C) = 0.141 + 0.087 + 0.007 = 0.235 = 23.5%

Addition rule for Non-Mutually Exclusive Events


For non-mutually exclusive events, there is an overlap between the two events.

If we were to use the same formula for the calculations, we would essentially be counting the overlap area twice. Therefore, the calculation changes. When we’re figuring out the probability of A or B, we add the probability of A plus the probability of B and also subtract the intersection of the two to make sure that we don’t double count that area. So the probability of A union B is the probability of A plus the probability of B minus the probability of the intersection of A and B:

P (A U B) = P (A) + P (B) – P (A and B)
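A brief Python sketch of the general addition rule; the overlapping-event numbers below are hypothetical and only illustrate the double-counting correction:

def prob_a_or_b(p_a, p_b, p_a_and_b=0.0):
    # P(A or B) = P(A) + P(B) - P(A and B); the overlap term is 0 for mutually exclusive events
    return p_a + p_b - p_a_and_b

print(prob_a_or_b(0.10, 0.15))        # mutually exclusive: 0.25
print(prob_a_or_b(0.4, 0.5, 0.2))     # hypothetical overlapping events: 0.7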

Multiplication Rule
With reference to the multiplication rule, we are trying to determine the probability of more than one event
occurring at a time. For instance, the probability of event A and event B both occurring together.

Illustration – (with two independent events) The probability that the sirens at a hydroelectric facility would fail was tested, and the result was 0.02. The probability that the sirens at the nearest town would fail was tested as well, and the result was 0.03. These are two independent events, and we would like to know the probability of the sirens in both locations failing.

P (A and B) = P (A) x P (B) = 0.02 x 0.03 = 0.0006 = 0.06%

We now explore the multiplication rule as it applies to dependent events, with an example to illustrate the
calculation.

Illustration – We have a wholesale grocery store that’s expecting a new shipment of 200 crates of tomatoes.
Unknown to the management at the location, 2 of those 200 crates are defective. What is the probability of
randomly selecting the 2 defective crates from the lot of 200?

Solution – We can find the probability using the multiplication rule for dependent events.
We first determine the probability of A and B. This is expressed as the probability of A times the probability of B, given A. Event A is defined as picking the first defective crate, and event B is defined as selecting the second defective crate. Since the first crate picked is not going to be returned to the lot, these events are dependent. The occurrence of event A affects event B by reducing the sample size from which event B is going to be drawn.

P (A and B) = P (A) x P (B given A) = (2/200) x (1/199) = 0.01 x 0.005 = 0.00005 = 0.005%

So the probability of event A is those 2 defective crates out of the 200 that were shipped. Since whatever crate is pulled first is not going to be returned, for event B we are looking at the one remaining defective crate out of the remaining 199 crates. That gives us 0.01 x 0.005, which results in 0.00005, or 0.005%.

Illustration – Assuming that we have a standard 52-card deck, we want to calculate the probability of dealing
three eights in a row. But we’re going to return the cards to the deck and shuffle those between each draw. The
calculation would be the probability of A and B and C. These are independent events, so we’re going to multiply
the probability of A times the probability of B times the probability of C. Since we return the card that we draw and
then shuffle the deck again, we have 4 out of 52 cards, since there’s four eights in a deck of 52 cards and we’re
replacing them each time. We multiply 4 divided by 52 times 4 divided by 52 times 4 divided by 52, and that gives
us a value of 0.00046 or 0.046%. So the probability of drawing 8 three times in a row is 0.00046 or 0.046%. But
what if we have three dependent events? Now we decide we’re not going to return the cards to the deck between
each draw. So we still have a standard 52 card deck, and we’re going to calculate the probability of A and B and C.
Since we’re not returning the card to the deck, we’re going to multiply the probability of A times the probability of
B, given A, times the probability of C, given A and B. On the first draw, we’ll have 4 out of 52 possible cards. In the
second draw, we will now have 3 out of 51 cards, since we haven’t replaced the first card we drew. In the third
draw, we will have 2 out of 50, since we haven’t replaced that other card. And that gives us a value of 0.00018 or
0.018%. This means that the probability of drawing three eights in a row is 0.00018 or 0.018%.

P (A and B and C) = P (A) x P (B) x P (C) = (4/52) x (4/52) x (4/52) = 0.00046 (independent, with replacement)
P (A and B and C) = P (A) x P (B given A) x P (C given A and B) = (4/52) x (3/51) x (2/50) = 0.00018 (dependent, without replacement)
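The card example can be checked with a minimal Python sketch (illustrative only):

from fractions import Fraction

# Three eights in a row with replacement (independent events)
print(float(Fraction(4, 52) ** 3))                                   # ~0.00046, or 0.046%

# Three eights in a row without replacement (dependent events)
print(float(Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)))    # ~0.00018, or 0.018%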

Permutations
With reference to the concept of probability, it is important to understand the concept of permutations.
Permutation refers to all the possible ways that we could do something.

For instance, if we have one red circle, one green circle, and one blue circle, and we wanted to calculate all the different ordered arrangements that we could have with these three circles, we would use the permutations formula.
Permutations could also be used while considering all possible combinations for a safe or determining how many
access codes can be created for a security system from a certain number of digits. When using permutation, we
represent a set of objects in which the position or order is important. So it is vital for us to understand the order
of the steps for the mathematical formulations.

The formula for permutations is nPr = n! / (n – r)!

In the formula, nPr is the number of permutations where n is the size of the larger group and r is the size of the
smaller subgroup. And the exclamation mark is a mathematical symbol for factorial. If we have n factorial, that is
calculated by multiplying n times n- 1 times n- 2 times n- 3 until we get to the value of 1. So if we want to
calculate 4 factorial, we multiply 4 x 3 x 2 x 1, which gives us a value of 24. If we want to calculate 6 factorial, we
multiply 6 x 5 x 4 x 3 x 2 x 1, which gives us a value of 720.

Illustration – We assume that there are 10 candidates, all of whom have similar skill sets and an equal chance of being selected. We need to select 3 candidates to fill three different positions. Considering that the same person cannot hold more than one office, in how many ways can the team select those office bearers out of its members?
Solution – Here order is important, since each position is distinct. Hence we are required to apply the permutation formula. For this formula,

n = 10 and r = 3

Number of permutations in which the positions can be filled = 10P3 = 10! / (10 – 3)! = 10! / 7! = 10 x 9 x 8 = 720
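A minimal Python sketch of the permutation count for this illustration, assuming Python 3.8 or later for math.perm:

import math

# Ordered ways to fill 3 distinct positions from 10 candidates: nPr = n! / (n - r)!
print(math.perm(10, 3))                                # 720
print(math.factorial(10) // math.factorial(10 - 3))    # 720, computed directly from the definition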

Combinations
One of the most important concepts in probability is combinations. If we are familiar with the concept of permutations, then we know that order matters there; we can think of the P in permutation as a reminder that position, or order, is important. The difference between permutations and combinations lies in the fact that order does not matter when dealing with combinations.

Now, when we look at the formula for a combination, it’s similar to the calculation for permutations; however, we also divide by r factorial, since order does not matter for combinations. The combinations formula tells us the number of ways we can select from ‘n’ objects, taking them in groups of ‘r’ objects at a time: nCr = n! / (r! (n – r)!). In the formula, nCr denotes the number of combinations, n is the size of the larger group, r is the size of the smaller subgroup, and the exclamation mark is the factorial symbol.

Illustration – Calculate the probability of getting exactly 3 heads in 5 flips of a coin using combinations.
Solution – Since order is not important here, we can apply the combination formula to count the ways of getting 3 heads in 5 flips of the coin.

Number of ways of getting 3 heads in 5 flips = 5C3 = 5! / (3! x (5 – 3)!) = (5 x 4) / 2 = 10

Since each flip has 2 equally likely outcomes, there are 2^5 = 32 possible sequences, so P (3 heads in 5 flips) = 10 / 32 = 0.3125, or 31.25%.
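A short Python sketch of the coin-flip calculation, assuming Python 3.8 or later for math.comb:

import math

ways = math.comb(5, 3)        # ways to choose which 3 of the 5 flips are heads: 10
total_outcomes = 2 ** 5       # 32 equally likely head/tail sequences
print(ways, ways / total_outcomes)   # 10 0.3125, i.e., about 31.25%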

Probability Distributions
Statistical distributions and probability distributions are the same thing. They are simply two different terms.
A probability distribution is a listing of the outcomes of an experiment that links each outcome to the probability of its occurrence. In Six Sigma, probability distributions are considered very useful in data analysis and decision making. The role of probability distributions is to answer questions like: what is the probability that a specific product will be defective, or how likely is it that there will be a certain number of defectives per million parts? The ultimate objective within Six Sigma is to have good data analysis, which leads to better information about the population and better decision making. However, it’s important to understand that a data collection exercise may deliver a dataset that does not fit a commonly known distribution; fitting a known statistical distribution to such a dataset is part art and part science. Probability distributions are used frequently within Six Sigma projects, particularly in the measure, analyze, and control phases, where tools such as statistical process control and defect probability determination are applied. When we’re using sampling to determine population parameters, we can use tools such as hypothesis testing, confidence intervals, and predictive analysis.
Statistical distributions are used to model these sets of data to simulate observations about similar or larger
populations.

In general, we use samples to infer information about that larger population. It is essential that we’re able to
choose the appropriate distribution so that we are representing and describing the data. As Six Sigma
professionals, we must be able to understand the behavior of probability distributions so as to determine the appropriate probability values within a given range. This will help provide information on the variation that we are trying to address in a specific process or product. Random variables are the outcomes of an experiment, and as outcomes they have numerical values, which can be continuous or discrete; their range is typically represented as a probability distribution. In general, Six Sigma practitioners use these process-specific random variables in statistical tests to generate a probability distribution and predict the probability of certain events occurring. Thereafter, inferences can be made even from a modest amount of sample data, because knowing the probabilities of a known distribution is helpful to the Six Sigma practitioner for decision making. When we talk about random variables, these could be continuous variables or
discrete variables. Continuous variables are variables where we have a range of possible outcomes. Typically, if
we think about a variable and we can divide it by two, and the resulting number is still feasible, it’s probably a
continuous variable. Some of the examples of continuous variables include time, height, weight, temperature, and
cost. Now, when we refer to discrete variables, these are variables that take distinct, mutually exclusive values or categories, such as the answer to a yes-or-no question. We could have a range of different colors to choose from or, when we think about
sizes; we can have small, medium, or large. When it comes to gender, we would have either male or female. We
could have pass, or fail, or yes, or no as options.
Normal Distribution
Indeed there are many types of distributions; here we shall discuss the normal distribution, commonly called the bell-shaped curve due to the shape of the distribution, and also called the Gaussian probability curve. The normal distribution is defined by two key parameters, the mean (µ) and the standard deviation (sigma, denoted by σ).

Common characteristics of Normal Distribution


•With the standard normal distribution, the z-distribution, there’s a standard deviation of 1 and a mean of 0.
•The majority of the values are clustered around the mean.
•It also has a single peak, so is considered uni-modal.
•The distribution is also symmetric around the mean.
•One more key characteristic of the normal distribution is that it has the same mean, median, and mode.
•The tails of the distribution extend to infinity in both directions.

Within Six Sigma, the normal distribution is by far the most popular and commonly used distribution since a
significant number of natural and man-made systems can be modeled by using this distribution. The normal
distributions are described by the mean, where the peak of the density occurs, and the standard deviation, or
sigma, or the spread.

Calculating Probabilities from Z-values


In order to calculate probabilities we will explore the standard normal distribution and how to calculate probabilities from Z-scores. The standard normal distribution has a mean of 0 and a standard deviation of 1. It is also commonly referred to as the Z distribution, and is linked to Z-scores. Any normal distribution can be converted into it, and we can then take the areas under the curve and assign probabilities based on a table. As part of this, we use Z-scores, which are calculated by subtracting the mean from a data point and dividing by the standard deviation: Z = (x – µ) / σ. It’s important to understand that µ and σ in the formula represent population parameters, of which sample data points provide an estimate.

Illustration – Suppose we are analyzing the time it takes to produce a computer chip; the mean manufacturing time is 150 seconds and the standard deviation is 30 seconds. We want to determine the likelihood of a chip taking longer than 165 seconds to produce. Using the formula, we subtract the mean, 150 seconds, from 165 and divide by 30, the standard deviation, to get a Z-score of 0.50.

It is important to recognize that this is a Z-score and not a probability. In order to determine the probability, we need a standard normal distribution table. The standard normal distribution table lists cumulative probability values; in other words, each entry includes the whole area to the left of that Z-value. If we’re trying to understand the area and the probability on the bell curve, we can look at a distribution plot of the density against the value of x; what we’re trying to do is equate an area under the curve with a probability. For a Z-score of 0.50, the table gives a cumulative probability of about 0.6915, so the probability of a chip taking longer than 165 seconds is 1 – 0.6915 = 0.3085, or roughly 31%.
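The chip-time example can be reproduced with a small, standard-library-only Python sketch; the normal_cdf helper below is an illustrative implementation of the cumulative normal probability using the error function, not a tool named in the text:

import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    # Cumulative probability of a normal distribution via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

z = (165 - 150) / 30                               # Z-score of 0.50
p_longer = 1.0 - normal_cdf(165, mu=150, sigma=30)
print(round(z, 2), round(p_longer, 4))             # 0.5 0.3085, about a 31% chance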

Binomial Distributions
This topic will explore binomial distributions, which are used when we only have two possible outcomes. For
instance, it could be that the product is defective or not defective, or it could be a yes or a no type answer with
only two options. Binomial distributions are used to help determine the probability of a number of successes over
a given number of trials that is used for discrete data. Some other examples are whether a product is good or
bad, whether the answer is true or false, or the number is 0 or 1. Here, the binomial distributions are used to
determine the occurrence or probability of an event, but not its magnitude. For instance, it could tell us if a prototype will work or not, but it is not going to tell us how long it will continue to function. It is essential to
understand that these elements should be independent and that there should be no overlap. Binomial distribution
could also be used to determine or investigate the process yield. It is also very helpful in sampling for attributes
in acceptance sampling. Most commonly used to determine the number of units that would fail under warranty.
Also binomial distribution could be used to estimate the number of people that will respond to a survey. The
actual binomial distribution formula is used to calculate the probability of x successes in n trials:

P (x) = nCx p^x (1 – p)^(n – x)

where
n is the number of trials (the sample size)
p is the probability of a success in any one trial
x is the number of successes that are desired

While we can calculate the distribution from that formula, a binomial table is often a more efficient way to determine the probability. The value of x, the desired number of successes, is looked up in the left-hand column, and the value of p, the probability of success for any one trial, is provided along the top of the table. For example, with n = 5 trials, x = 3 and p = 0.6, the table gives a probability of 0.346.
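A minimal Python sketch of the binomial formula, which reproduces the table value used above (assumes Python 3.8+ for math.comb):

import math

def binomial_pmf(x, n, p):
    # P(x successes in n trials) = nCx * p^x * (1 - p)^(n - x)
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

print(round(binomial_pmf(3, 5, 0.6), 3))   # 0.346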

Poisson Distributions
We shall now discuss Poisson distributions, which are primarily used when we count occurrences within a unit of space or time, over a fixed observation period. The Poisson distribution considers events that occur at a constant rate, but that are also independent. For instance, we could be modeling defect counts per unit, or
failures over time, or traffic flow, or arrival times. The Poisson distribution is most commonly used for modeling
rare events because it’s ideal for the situation. Unlike the binomial distribution, the Poisson distribution is
unbounded, and that means that it deals with counting an event sequence that does not have an apparent end.
One of the other key differences is that the binomial distribution looks at cases of either success or failure, in other words, whether an item has a defect or does not have a defect, whereas the Poisson distribution counts the number of defects or occurrences themselves.

Poisson distribution formula

P (x) = (e^(–λ) λ^x) / x!

where
lambda (λ) is equal to both the mean and the variance of the Poisson distribution,
x is the number of defects,
e is the base of the natural log, also referred to as Euler’s number

Here, it is important to note that for the Poisson distribution the mean and the variance are equal, and the variance is the square of the standard deviation.

Illustration – The number of defects in fuel injectors produced by one of the company’s factories averages 3.5 per
day. If we assume an equal daily production, we want to find the Poisson probability of getting exactly 6 defects
on a given day in the factory.

Solution – Therefore, we could use the equation to determine the probability that x will be equal to the probability
of 6 defects.

We can set up the formula: the probability of 6 defects equals e to the power of –3.5 (the 3.5 comes from the average of 3.5 defects per day), multiplied by 3.5 raised to the power of 6 (since we’re solving for 6 defects), divided by 6 factorial. Therefore, the probability of having exactly 6 defects is equal to 0.077, or 7.7%.
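The same Poisson calculation as a minimal Python sketch:

import math

def poisson_pmf(x, lam):
    # P(x) = e^(-lambda) * lambda^x / x!
    return math.exp(-lam) * lam**x / math.factorial(x)

print(round(poisson_pmf(6, 3.5), 3))   # 0.077, i.e., a 7.7% chance of exactly 6 defects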

Chi-Square Distributions
We primarily use distributions such as the chi-square, F, and Student’s t to work from a sample and find the cumulative probability of the observed data. The chi-square distribution helps us make decisions, test hypotheses, and construct confidence intervals. The chi-square distribution and its sample statistic are used to investigate whether distributions differ from one another. Sample data can then be generalized to more complex situations, as information from one production line can be applied to the whole factory. The chi-square statistic also helps us test how closely the actual data fits or follows a normal distribution; the data may pass or fail the test based on the chi-square value. The chi-square distribution is related to the normal distribution in the following way: a chi-square variable with k degrees of freedom is the sum of k squared standard normal (Z) variables. The chi-square distribution begins at 0 and continues to positive infinity, so there can be no negative results. The shape of the distribution depends on the degrees of freedom; by increasing the value of k, the degrees of freedom, the chi-square distribution approaches more of a normal distribution.
Both µ and σ of the chi-square distribution depend on k, the degrees of freedom; this is the parameter that determines the shape of the chi-square distribution. When using the chi-square distribution to test a variance, we work with σ, the standard deviation of the new population, σ0 (sigma sub zero), the standard deviation of the original population, and s, the sample standard deviation for the new population; the test statistic in that case is χ² = (n – 1)s² / σ0².
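As an illustrative, simulation-based Python sketch of the relationship just described, summing k squared standard normal draws produces values that follow a chi-square distribution with k degrees of freedom:

import random

def chi_square_draw(k):
    # Sum of k squared standard normal values follows a chi-square with k degrees of freedom
    return sum(random.gauss(0, 1) ** 2 for _ in range(k))

k = 5
draws = [chi_square_draw(k) for _ in range(100_000)]
print(sum(draws) / len(draws))   # close to k, since the mean of a chi-square distribution equals k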

Student’s t-Distributions
The Student’s t-test, based on the t-distribution, is typically used to determine if two sets of data are significantly different from each other. The Student’s t-distribution is commonly used when the test statistic would follow a normal distribution but the population standard deviation is unknown. The Student’s t-distribution is similar to the chi-square distribution in that the shape of the distribution is affected by the degrees of freedom; a specific t-distribution is defined by its degrees of freedom. The shape of the distribution is similar to the standard normal distribution, but there’s greater spread due to the uncertainty about the standard deviation. As the degrees of freedom, which is (n – 1), increase, the distribution gets closer and closer to the normal distribution. This can be explained by the central limit theorem. The z-distribution, or standard normal distribution, can be used to find confidence intervals in situations where the sample size is large. But when the sample size is small, usually less than 30 samples, the Student’s t-distribution is used instead. The Student’s t-distribution is typically used for hypothesis testing.

The t-test is used to determine whether two datasets are significantly different from each other. It can also be used to construct confidence intervals around the mean of a process. It is used instead of the standard normal (z) distribution when the sample size is small, typically less than 30, and when the standard deviation of the population is unknown. As the sample size gets larger, the degrees of freedom increase and the t-distribution curve approaches the normal distribution. The t test statistic is calculated as t = (x̄ - µ) / (s / √n), where x̄ is the sample mean, µ is the population mean, s is the sample standard deviation, and n is the sample size. There are four steps to using a t-distribution table, which has rows for degrees of freedom and columns for significance levels, to find the t-critical value. First, we determine the degrees of freedom, found in the first column of the table. Then we determine if the test is one-tailed or two-tailed. Next, we determine the desired significance level for the test. Finally, we find the t-critical value at the intersection of the degrees-of-freedom row and the significance column for the chosen tail test.
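As a hedged sketch of that table lookup, the following Python snippet (assuming SciPy; the alpha and sample size values are illustrative) computes the two-tailed t-critical value directly instead of reading it from a printed table.

# Minimal sketch: finding a t-critical value (assumes scipy installed).
from scipy import stats

n = 12                 # illustrative sample size
df = n - 1             # step 1: degrees of freedom
alpha = 0.05           # step 3: desired significance level
two_tailed = True      # step 2: one-tailed or two-tailed test

# Step 4: the critical value (equivalent to the table lookup).
if two_tailed:
    t_crit = stats.t.ppf(1 - alpha / 2, df)
else:
    t_crit = stats.t.ppf(1 - alpha, df)
print("t-critical:", round(t_crit, 3))   # ~2.201 for df=11, two-tailed, alpha=0.05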

F-Distributions
A plot of the F-distribution shows the F value on the x-axis and the probability density P(F) on the y-axis. The F-distribution is typically used for testing the hypothesis of equality of variances from two normal populations. The F-distribution is used with continuous variables and is primarily used to model the ratio of variances.

Characteristics of an F-distribution curve


•The F-distribution curve is a plot of the ratio of two independent chi-square variables.
•In an F density curve, the peak is near 1, and values that are far from 1 in either direction provide evidence against the hypothesis of equal standard deviations.
•The F distribution is defined by a ratio.
•To approximate the F distribution, the distribution of the ratio of two estimated variances calculated from normal data is used.
•There are two degrees of freedom.
•The F distribution is characterized by two parameters, the degrees of freedom of the two samples.
F distribution formula – The F statistic is calculated by dividing one randomly obtained sample variance by the other.

This involves three main steps and is how we calculate the ratio value.

•We start by identifying the two populations being compared. When we have population standard deviations, we use sigma1 (σ1) and sigma2 (σ2) to represent Population 1 and Population 2; when we only have sample data, we use the sample standard deviations s1 and s2.
•The second step is to select random samples of the desired size from the two populations and then determine their sample standard deviations, s1 and s2.
•The third and final step is to put the values into the F statistic formula.

The F statistic is the ratio F = (s1²/σ1²) / (s2²/σ2²); under the hypothesis that the two population variances are equal, this reduces to the ratio of the two sample variances, F = s1²/s2². An application of this concept is to evaluate data collected from two different work shifts for the same characteristic; we would be looking for insight into the population variances, to see whether they are homogeneous or not.
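A minimal sketch of such a shift-to-shift variance comparison, assuming NumPy and SciPy and using made-up sample data, might look like the following.

# Minimal sketch: F-test for equality of variances (assumes numpy and scipy installed).
import numpy as np
from scipy import stats

shift_a = np.array([10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 10.4])   # illustrative data
shift_b = np.array([10.6, 9.4, 10.9, 9.7, 10.8, 9.5, 10.7, 9.6])

s1_sq = np.var(shift_a, ddof=1)           # sample variance, shift A
s2_sq = np.var(shift_b, ddof=1)           # sample variance, shift B

f_stat = s1_sq / s2_sq                    # F statistic: ratio of the two sample variances
df1, df2 = len(shift_a) - 1, len(shift_b) - 1

# One-sided p-value for the hypothesis that the variances are equal.
p_value = stats.f.sf(f_stat, df1, df2) if f_stat > 1 else stats.f.cdf(f_stat, df1, df2)
print("F =", round(f_stat, 3), " p (one-sided) =", round(p_value, 3))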

Data Classification
As we move to the Measure phase of the DMAIC cycle, we start to collect and summarize data from the processes. It is very important at this stage to understand the different types of data and measurement scales. Data is the information that we collect through the measurement systems. The data is what a Six Sigma team will use to understand the variation in the processes. Data is also used to check whether or not the process meets the intended targets for customer satisfaction. Therefore, at this stage, it is very important to understand the information to be collected for results and better output. We must get a clear understanding of,

•What is the purpose of the data?


•What insight are we trying to get from this data?

By understanding the purpose of the data, we can make sure we are collecting the right type or classification of data. In addition, we need to understand what we need to know. By understanding the team's objective for collecting this data, we can make sure that the right information is being collected. One of the primary types of data that can be collected is quantitative data. Quantitative data gives us a classification in terms of numerical values, typically on a continuous scale. This means that if we take the data and divide it by 2, the resulting value is still meaningful. For instance, if we are measuring candies and have one dozen candies, cutting that number in half still gives half a dozen, or six candies, which is still meaningful.

Quantitative data gives us information on how the process is running, but it does not give us discrete information, that is, whether the process is functioning well or badly. Quantitative data gives us more information, i.e., where the process is running within the specification limits, and therefore enables us to determine if the process is running on target.

For instance, if the target is to give every customer 10 ounces of tea, then we would make sure we have 10 ounces. We can compare that to the upper and lower specification limits to determine whether the process is running on target, close to the upper specification limit, or toward the lower specification limit. We can then use this insight to adjust the processes to ensure we are working towards achieving the target and meeting the customer's expectations. Conversely, we can also have qualitative data. If we consider the example of the candies and tea again, the candies could be sweet or the tea could be hot. That gives us some information about how the processes are running, but it does not give us quite enough information about how we should adjust the processes to make sure we are hitting the target. So while qualitative data provides useful information, it does not give us guiding direction on how to make modifications.

Continuous and Discrete Data

Continuous Data
We define continuous data as the data that spans various values. One way to tell quickly if the data is continuous
data is to see if we can divide the data by half and still get a meaningful value. For instance, if we are looking at
time, say 20 minutes, we can divide that in half and get 10 minutes. Some other examples of continuous data
include things like height, weight, temperature, and cost. The most important thing about continuous data is that it gives us information that we can compare to the specification limits and the target based on the customer's expectations, and that, as a Six Sigma team, we can use to determine how the process is operating.

Characteristics of Continuous Data


•Continuous data can be measured on a scale, and it has a wide range of values that could be represented.
•Continuous data can be broken down into smaller units. If we consider a variable such as temperature, we can break it down into readings such as 10 degrees, 20 degrees, or 35 degrees.
•Continuous data includes different subcategories such as physical property data like actual measurements and readings,
or it could be resource data, such as how often a piece of equipment is being used.

Discrete Data
Now data can also be discrete. Some examples of discrete data include color (red, or green, or blue) or maybe
sizes (small, medium, large), or result (pass or fail), gender (male or female). Discrete data does not have specific
readings that we would be taking over a scale of values. It is primarily classifications.

Characteristics of Discrete Data


•Discrete data is something that describes an attribute. So if we talk about clothing, we could have a medium red shirt. Here the attributes would be the size medium and the color red.
•Discrete data cannot be broken down into smaller units. That is if we have a medium shirt, we cannot divide its size in
half as it won’t result in a valid value.
•Discrete data is measured in exclusive categories. The shirt is either red or blue or the size is small, medium, or large,
and so on. This means they are exclusive categories.
•Discrete data includes subcategories. It can be used for characteristic data, again, like the color of a shirt. It can be used
for count data, such as how many items we have of a specific value. Or it can be used for intangible data.

Difference between Continuous Data and Discrete Data


Before we explore choosing an appropriate type of data, it is important to understand whether to use continuous
or discrete data.

Continuous Data vs. Discrete Data

•Continuous data helps in getting more specific information about the output of the process and about the measurement itself; discrete data is much easier to collect and interpret.
•Continuous data is easier to analyze, since we have a very specific value; discrete data requires large sample sizes and, because of the type of data being received, it can be subjective.
•Continuous readings provide more precise information, because they tell us how far off the target we are; with discrete data we get only fairly limited information about the process, so we need to make sure we have sufficient data.
•Continuous data requires a precise measuring instrument to make sure we are getting the right information; with discrete data, rather than precise measurement instruments, we may be able to use gauging systems that quickly tell us whether a part is good or bad.
•We rely on continuous data for precision; we use discrete data to create order or to make comparisons.
One must understand that discrete data lends itself nicely to Six Sigma, because we can use this type of information to calculate sigma levels. We can also use discrete data to make comparisons, such as whether one item is bigger or smaller than another. Discrete data is also useful because it provides the Six Sigma team with definitive answers (such as yes, no, pass, fail, or whether a product or service is defective or not defective).

It is also important to understand how continuous and discrete data tie in with the phases of the DMAIC
methodology.
1. Define Phase: Within the Define phase, we are trying to understand the proportion of defects we currently have, to understand the magnitude of the problem. This is where discrete data can be very useful to understand the percentage of defects.
2. Measure Phase: As we move to the Measure phase, we want to understand a little bit better how the process is
operating. Therefore, this is where continuous data can be very useful to understand the process itself, and
whether the current process is operating near target, or towards one of the specification limits.
3. Analyze Phase: In the analyze phase while testing the hypothesis, we might want to test continuous data to
really understand what the impact on the process is. In addition we could also use discrete data to really
understand where the defects are coming from and whether or not specific processes or a series of steps lead to
a defect, which would be discrete data.
4. Improve Phase: In the Improve phase, we are typically using more of the advanced statistical tools. This is where continuous data works well with tools such as design of experiments, to understand the impact on the output of the process.
5. Control Phase: Finally in the control phase, this is where the discrete data comes into play. Since this is where
we can monitor the process, and determine the ratio of defects that we have.

Scales of Measurement
In the process of Six Sigma execution, it is essential to determine the type of measurement scale being used. When talking about the nature of a scale, the scale needs to relate to some sort of standard. For instance, when we look at temperature, we need to understand if the scale is in Fahrenheit or Celsius; when we talk about length, we need to know whether it is being measured in meters or centimeters (the scale). So when someone tells us it is 21 degrees outside, it is important to understand the difference between Fahrenheit and Celsius.

There is a hierarchy of four levels of measurement called NOIR,

•Nominal
•Ordinal
•Interval
•Ratio
These four levels form a hierarchy, often depicted as a pyramid with nominal at the base. Each level includes all of the qualities of the level below it but adds something new, so there is an accumulation of new characteristics as we move from nominal at the bottom up through ordinal, interval, and ratio. Let us discuss them now,

•Nominal Scale: Within the nominal measurement scale, we have discrete variables. It is important to make sure that the different classes are mutually exclusive, and they should also form an exhaustive list. There is no relative ordering among the classes. For example, nominal variables such as race, religious affiliation, political party affiliation, college major, hair color, or birthplace have no relative ordering between them, and we could assign each of those values a name, such as red, blue, etc.
•Ordinal Scale: Ordinal scale includes numbers that are used to represent a rank order. This rank order’s typically used to
compare results. For instance, we could rank things first, second, third, or light, darker, or darkest. And that gives a
comparison amongst the different groups. However, there is no indication of the distance between each of these ranks. It’s
still represented with different classifications.
•Interval Scale: The next type of measurement scale is the interval scale. The difference between the ordinal and interval
is that the difference between the values is equal. In which case we understand the distance between those different
values. In case of interval scale, we are looking at scale variables. For instance, if we rank something on a scale from 1 to
10. In which case we assign a number, to understand what those equal differences between the values are. In addition, we
could look at the scale of temperature. This would be a scale variable. And the difference between each value would be
equal. The other characteristic of an interval measurement scale is that there is no absolute zero.
•Ratio Scale: Finally, we have the ratio scale. The values in this scale have a fixed 0 point. For instance, when we
consider height, or length, there is a fixed 0 point. We can compare these values using percentages or multiples because of
the type of variables we have.

Process of Data Sampling


Data sampling is an important element of Six Sigma project execution. If we are trying to improve a specific characteristic of a product and we measure 100% of the products, then we would have completely accurate information about the population. However, it is usually not feasible to measure 100% of the products, due to cost and time constraints. Therefore, it becomes important that we understand how to sample data.

“A sample refers to taking a reduced set of the population and using data from that sample set to make
inferences about the whole population”. Data sampling can be used as a tool that will help save us time and will
also be more cost effective for the projects. Also, sampling is considered effective when it’s done with some best
practices in mind.

Points to keep in mind during sampling –


•Ensure that the sample is free from bias. You must aim to get samples that are unbiased. If there is bias with that one
specific reading, then that is going to impact how we make the inferences from the sample towards the full population.
•Also it is important to have samples that are large enough to detect trends that are happening within the products and
services. A small sample would not be accurate to detect the trends that are occurring.
•Ensure that the sample represents the entire population that we are studying. In order to do that, we must create a sampling plan ahead of time, keeping in mind that we need to get a representative, large enough sample and avoid bias in the sampling practices.

The two key terms in data sampling are homogeneity and heterogeneity. When we refer to homogeneous samples, we are trying to understand how subjects are the same; when we refer to heterogeneous samples, we are looking at how subjects are different. Because the goal is to get a representative sample and avoid any type of bias, during the process of sampling we want to minimize the heterogeneity in the sample as much as possible. This will help increase the accuracy of the results of the sampling. One way to do this would be, for instance, to collect data at the same time every day.
Methods of Sampling
In the process of Six Sigma, it is important to understand the different types of sampling methods and their corresponding characteristics. We shall discuss four methods of sampling: simple random, stratified, systematic, and rational sampling.

•Simple Random Sampling: With simple random sampling, each unit in the population has an equal probability of being selected in the sample. Random sampling is useful because it helps protect against bias being introduced within the sampling process, and it also helps ensure that we are obtaining a representative sample.
•Stratified sampling: Stratified sampling is usually used when we have a population that contains different groups, typically referred to as strata. When the population has two or more different groups, it is critical within Six Sigma to make sure that each of those groups is properly represented within the sample. Therefore, we take independent samples from each of the different groups of the population, and the size of each sample is proportional to the relative size of each group. In this way the team can make sure they are getting representative samples from each of the groups, which are in turn representative of each group within the total population.
•Systematic sampling: Systematic sampling is typically used when the Six Sigma team is collecting data in real time, during a process operation. With systematic sampling, the team takes samples according to a systematic rule, for example, every nth item from the population. Systematic sampling is very useful while running full production, or when the process is in operation, because the sampling frequency can be set up ahead of time.
•Rational Sampling: Rational sampling is used when we are able to put measurements into different subgroups and we want to understand the sources of variation. For instance, suppose we have two production lines and an operator running each line, Operator A and Operator B. Within these two parallel production lines, we can collect samples and then compare the differences between them. Similar to systematic sampling, rational sampling is commonly used to collect real-time data during process operations; since we are gathering data on two or more different subgroups, we are better able to understand sources of variation within and between those groups. Rational subgroups are very useful for calculating estimates of the standard deviation, and they can also serve as the basis for effective control charts.

Simple Random Sampling


We shall now discuss simple random sampling in more detail and the advantages of using this technique. Some
of the advantages of Simple Random Sampling –

•Simple random sampling is set up so that each unit in the population has an equal probability of being selected in the sample.
•Simple random sampling provides the purest form of sampling and is also the most cost effective.
•Random samples are set up by assigning a number to each unit within the population; then, typically, a random number table or generator is used to pull a sampling list. This is why this sampling method is considered very cost effective, as it does not require expensive technology.
•This form of sampling provides a representative sample, and it is the best form of sampling for avoiding bias within the sample.
•Finally, simple random sampling works best when we have an accurate database of the population, so that we can use that information to number the units and apply a random number table or generator to obtain a representative sample. (A minimal sampling sketch follows this list.)
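The following minimal Python sketch (assuming NumPy; the population size and sample size are illustrative, not from the source) shows the assign-a-number-then-draw approach described above.

# Minimal sketch: simple random sampling without replacement (assumes numpy installed).
import numpy as np

population = np.arange(1, 501)          # units numbered 1..500 (illustrative population)
sample_size = 30

rng = np.random.default_rng(7)          # stands in for a random number table/generator
sample = rng.choice(population, size=sample_size, replace=False)
print(sorted(sample))                   # every unit had an equal probability of selection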

Points to remember while performing random sampling


There are several considerations for the Six Sigma team when collecting those random samples.

•It is very essential to avoid sampling error.


•Ensure that we have a homogeneous sample.
•We must use systems such as a random number function or a random number table, together with the various software packages available that can generate a sampling list. This will help us ensure that we are getting a representative sample that is truly random.

Cases where random sampling may not be used


Conversely, there are also times when we should not use simple random sampling.

•When the population is highly varied, since a random sample may not take into account the variation within the process.
•When we do not have a list of the entire population available. It may be hard to determine what the appropriate sample should be when we go to the random number table or random number generator.
•If interviews are required, then the sampling would not be truly random, since we would have to target specific interviewees and they would have to agree to the interview.
•Another situation in which random sampling should not be used is during process changes. If the process changes over time, random sampling would pull values from before and after the change, making it difficult to distinguish when the change occurred, which might distort the results.
•If the process is well understood, there may not be a need to perform random sampling, as the team may already have enough data about the process.

Stratified Sampling
Stratified sampling is a very useful sampling approach that offers several benefits.

•It is more precise than simple random sampling, since we are taking into account different groupings.
•Stratified sampling is used for heterogeneous populations; this means within the population, we have multiple groups.
Therefore by using stratified sampling, we are taking into account each of those groups within the heterogeneous
population.
•When using stratified sampling, we use a proportion of the groupings within the sampling practices. This helps ensure
that we are reflecting each of those various groupings. In addition, the subgroups are developed so that they’re
homogeneous within the subgroup, more than they are within the main population.
•Stratified sampling offers better control of the sample composition, since we are taking into account the different groups within the population. Additionally, since we are capturing data from each of the various groups, we can use a smaller sample size and still capture representative differences.

Points to remember while performing Stratified Sampling


There are several key considerations that the Six Sigma team needs to take into account when using stratified
sampling.

•It is very essential for the team to select the correct strata. This means that they need to make sure they are choosing the
correct group for sampling.
•Each of those groups, or strata, must be mutually exclusive and exhaustive, so that each subgroup is homogeneous.
•There should be no overlap between the subgroups. It becomes very important to capture the maximum differences between strata, as we want to make sure there is no confusion between the different groups.

Approaches in Stratified Sampling


Within stratified sampling, there are two approaches –

•Proportionate allocation: With proportionate allocation, we determine how many units are in each group and then allocate the sampling so that it is proportional to how each group is represented in the total population. We then sample accordingly to make sure we are getting a representative sample. (A short proportionate-allocation sketch follows this list.)
•Disproportionate allocation: With this type of allocation, we take larger samples from the strata with the greatest variability. In this way we capture more information on the strata that show the key factor we are looking for, which is variability, and we obtain higher precision in the results. Disproportionate allocation is typically used when the sampling costs vary across the strata; each stratum might have a different cost to sample. We would want to capture the stratum that has the highest variability and sample more from it, which helps reduce the cost and gives us increased precision in the results.
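A minimal proportionate-allocation sketch, assuming pandas (version 1.1 or later for group-wise sampling) and an illustrative 'strata' column that is not from the source, might look like this.

# Minimal sketch: proportionate stratified sampling (assumes pandas >= 1.1 installed).
import pandas as pd

# Illustrative population: 300 units from line A, 100 from line B.
population = pd.DataFrame({
    "unit_id": range(400),
    "strata": ["line_A"] * 300 + ["line_B"] * 100,
})

# Sample 10% from each stratum, so the sample mirrors the population proportions.
sample = population.groupby("strata", group_keys=False).sample(frac=0.10, random_state=1)
print(sample["strata"].value_counts())   # roughly 30 from line_A, 10 from line_B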

Check Sheets
One of the most commonly used forms for data collection is check sheets. Some of the features of check sheets
are –

•The check sheet is a simple method of data collection provided in a form.


•Check sheets are generally for operators or those directly involved with the process or service that’s being produced.
•Check sheets are used to gather up-to-date and timely information as the product or service is being produced and is used
during manual data collection.
•A check sheet typically requires no experience and little to no training. The information collected on a check sheet is very useful when we are conducting simple analysis, such as Pareto charts, histograms, or run charts.
•Check sheets are commonly used in production to capture information, such as the number of defects occurring in each step of a process.

Types of Check Sheets


Common types of check sheets – Basic, frequency plot, traveler, location and confirmation.

•Basic Check Sheet: A basic check sheet is used to capture instances of a problem over time. So we can use it to track or
count those instances. For instance, if we are running the production line, let us say we have one chipped part and then
later in the day, we have a scratched part. The next day, we might have two dented parts and the next day, we might have
a missing part. And then the next day, we could have three chipped parts. By tallying the information, we could have a
good idea of what’s happening and the kinds of defects that are observed over time.
•Frequency plot check sheet: The next type of check sheet is a frequency plot check sheet. This kind of check sheet is
used to detect unusual patterns and provide some sense of average and range without too much analysis. It can also be
used for further analysis.
•Traveler Check sheets: Another type of check sheet is a traveler check sheet. A traveler check sheet typically
accompanies the part or the batch of parts from one step in the process to another step in the process.
•Location Check Sheet: Subsequently we have the location check sheet that typically provides a schematic in the form of
an image, a drawing or a diagram of a part or step of the process that’s under investigation. This type of check sheet is
useful for identifying where in the process or a product a defect is occurring. The data collectors would mark on the
image where the defects are occurring to pinpoint the problem areas. Each of those areas will be given one mark on that
step of the process. Each time there’s a defect, a mark is added to the schematic. Location check sheets are sometimes
called measles charts due to the cluster of dots that mark the results.
•Confirmation Check Sheet: The final type of check sheet is a confirmation check sheet. Confirmation check sheets are used to make sure that each step of the process is occurring. As each step of the process is completed, a check mark is placed on the check sheet to indicate that the step was complete. Confirmation check sheets are useful for tracking machine maintenance, or for ensuring that all parts of a multistep process have been completed so far.

Data Coding
Another useful tool within the process of data collection is data coding. Data coding is considered one of the best practices for ensuring that we don't introduce errors within the data collection process. Data coding is a way to change how the data is actually collected and reported by using coded values rather than the actual readings, which makes data collection much easier and more accurate. For example, coding can be the transformation of data into a form understandable by computer software, such as creating a code for different forms of a similar response to a question in a survey. Data coding also helps avoid some of the issues with transcription errors. In addition, it reduces sensitivity to rounding by using codings rather than full values.
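As a hedged illustration of the survey example, the short Python sketch below (the response wording and code values are made up for illustration) shows coding free-form survey answers into numeric codes before analysis.

# Minimal sketch: coding survey responses into numeric values (plain Python).
responses = ["very satisfied", "satisfied", "Satisfied", "neutral", "dissatisfied"]

# Coding scheme: map each standardized response to a short numeric code.
codes = {"very satisfied": 5, "satisfied": 4, "neutral": 3,
         "dissatisfied": 2, "very dissatisfied": 1}

coded = [codes[r.strip().lower()] for r in responses]
print(coded)   # [5, 4, 4, 3, 2] -- easier to tally and less prone to transcription errors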

Descriptive and Inferential Statistics


Population and sample data are key concepts in statistics. The goal is to know everything there is to know about a population. If we consider a group of products, this would mean capturing data on every single product in the group. ‘Samples are a small subset of the overall population’. When we gather information on samples, we need to make sure that we have representative samples, since the samples are going to be used to make inferences about the population. Before we begin gathering data for process improvement projects, we need to understand how we are going to analyze that data. There are specific parameters for a population, and there are specific statistics for a sample. The most commonly used data characteristics are the mean, which is the arithmetic average, the standard deviation, and the variance. Standard deviation and variance are used to describe the amount of variation there is within a process.
There are subtle differences in notation between population parameters and sample statistics. For a population, the mean is represented as mu (µ), the standard deviation as sigma (σ), and the variance as sigma squared (σ²). In contrast, for sample statistics, the mean is represented as x bar (x̄), the standard deviation as s, and the variance as s squared (s²).

It is essential to distinguish between descriptive and inferential statistics.

Descriptive Statistics
Descriptive statistics are primarily used to describe the process itself. It’s good to give descriptions about the
process in terms of statistics. One of the most commonly used tools in descriptive statistics is the histogram,
which provides a considerable amount of information about the data and can be used to communicate how the
process is operating. Histograms are also used for decision making. In a histogram, we can examine the curve of
the data to identify how much variation there is within the process. We can also identify the centering of the
process by looking at the mean of the data. So in descriptive statistics, tools like histograms provide a
description of how the process is operating.

Inferential Statistics
In inferential statistics, we are taking samples from the population and making inferences from that sample data
about the population.  Let us suppose we have 30 data points such that with these 30 data points, we can
calculate the mean, the standard deviation, and the standard error around the mean. From the information, we
can calculate confidence intervals and other information. Although in most cases more data is better, it is
possible to learn a lot of directional information about a process with as little as 30 data points.

Difference between Descriptive and Inferential Statistics

•Approach: In descriptive statistics, the approach is more inductive; we are trying to induce information. In inferential statistics, the approach is deductive; we are trying to deduce information.
•Goal: The goal of descriptive statistics is to present or summarize data to make decisions about the present situation, about the current process. The goal of inferential statistics is to infer population characteristics from the sample data points to predict future outcomes.
•Tools/Techniques: Common tools and techniques in descriptive statistics are histograms, interrelationship diagrams, process maps, and fishbone diagrams. In inferential statistics, we typically use more advanced, more complex statistical tools such as chi-square, binomial and Poisson distributions, hypothesis testing, confidence intervals, correlation, or regression analysis.
•Interpretation: Descriptive statistics are fairly straightforward; the amount of information needed and the difficulty in creating them are not complex. Inferential statistics aim to predict future outcomes; the amount of information needed and the difficulty in creating them are greater.

Central Limit Theorem


We now consider the central limit theorem, how it is linked to the normal distribution, and its significance in inferential statistics. Before starting with the central limit theorem, it is essential to understand the normal distribution. In a normal distribution, most of the values in the data set are close to the average of the data; therefore, the standard deviation is small. The normal distribution also allows for easy inference about the population based on a sample: with a normal distribution, we can use data from the samples to make inferences about the total population. The normal distribution is commonly known as the bell-shaped curve, since its plot looks like a bell. For a process that follows the normal distribution, 68.26% of the data falls within plus or minus one standard deviation (one sigma) of the mean, 95.44% of the data falls within plus or minus two standard deviations, and 99.74% falls within plus or minus three standard deviations. This is known as the 68-95-99.7% rule. With a normal distribution, we can sample from within these values and make an inference about the total population.
The basic statement of the central limit theorem is that the sampling distribution of the mean approaches a normal distribution as the sample size n increases.

Here n is the sample size used for each sample mean. When n = 30, which is typically regarded as a statistically adequate number of samples, the distribution of sample means is approximately normal. As the sample size increases, we can apply statistical inference and analysis to sample data whether or not the underlying population is normally distributed.
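A minimal simulation sketch of this idea, assuming NumPy and using an intentionally skewed (exponential) population whose parameters are purely illustrative, is shown below.

# Minimal sketch: the central limit theorem by simulation (assumes numpy installed).
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=200_000)   # skewed, clearly non-normal

n = 30                                                  # sample size for each mean
sample_means = [rng.choice(population, size=n).mean() for _ in range(5_000)]

print("population mean:", round(population.mean(), 3))
print("mean of sample means:", round(np.mean(sample_means), 3))   # ~ population mean
print("std of sample means:", round(np.std(sample_means), 3))     # ~ sigma / sqrt(n)
print("sigma / sqrt(n):", round(population.std() / np.sqrt(n), 3))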
Inferential Statistics Tools
There are several commonly used tools for inferential statistics, including confidence intervals, hypothesis testing
and control charts.

•Confidence Interval: Confidence intervals are used to state, with some level of confidence, that the mean of the population falls within a certain range. To determine the confidence interval, we collect data from samples and use that information to determine the mean and the standard deviation of the process. From this information, we can then make an inference about the total population. (A short confidence-interval sketch follows this list.)
•Hypothesis Testing: Another tool that can be used in inferential statistics is hypothesis testing. Hypothesis testing is used
to test a null hypothesis. A null hypothesis is some state of nature for which we don’t necessarily know the true outcome.
But which we would want to test with a certain level of confidence. The null hypothesis (Ho) is typically set up to test if
two values are equal, one is less than or equal to, or one is greater than or equal to. This can be used to compare the means
of a process or of the standard deviations of a process. The alternative hypothesis, (Ha), is the alternate of the null
hypothesis. Therefore, we would be stating that the two values are not equal. Or that one value is greater than or less than
the other value. Within hypothesis testing, we would be using data from a sample to infer the true state of the population.
•Control Charts: Another commonly used tool in inferential statistics is the control chart. In a control chart, we plot data that we are collecting at some sampling rate; from that data we pool samples, and we plot the sample values over time on the chart. We are using sample data to make an inference about how the population of this process is running. With a control chart, we are able to plot the center line, which is the mean of the process. Based on the sample data, we calculate upper and lower control limits, which are statistically determined from the mean and the variation within the data. This information is used, again, to infer information about the entire population from the sample.
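The following minimal sketch, assuming NumPy and SciPy and using made-up sample readings, computes a 95% confidence interval for the process mean using the t-distribution.

# Minimal sketch: 95% confidence interval for the mean (assumes numpy and scipy installed).
import numpy as np
from scipy import stats

readings = np.array([9.8, 10.1, 10.0, 10.4, 9.9, 10.2, 10.3, 9.7, 10.0, 10.1])  # illustrative

mean = readings.mean()
sem = stats.sem(readings)                      # standard error of the mean
ci_low, ci_high = stats.t.interval(0.95, df=len(readings) - 1, loc=mean, scale=sem)
print("mean:", round(mean, 3), " 95% CI:", (round(ci_low, 3), round(ci_high, 3)))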

Measures of Central Tendency


As we begin the process of data collection, it is critical to understand the various measures of central tendency. “Central tendency refers to whether or not the center of the process falls close to the target.” So with central tendency we are really looking at the centering of the process. On the other hand, we have the concept of dispersion; with dispersion we are trying to understand how much variation we have within the process. These concepts are important since, with Six Sigma, what we are trying to do is achieve on-target performance, so that the center of the process is equal to the target of the process. The primary objective is to do this with as little variation as possible; by reducing the variation, we provide consistent output to the customer and consistently meet their expectations. So we really need both measures.
We must closely consider the concepts of central tendency and dispersion when we are depicting a dataset.

There are three key measures within central tendency, mean, median, and mode. The mode of the process is the
value that occurs most frequently. The median value is the middle value based on the ordering of all of the
different values and the mean value is the arithmetic average.

Now, let's talk about how each of these is calculated. The mean, since it is the arithmetic average, is calculated by adding up all of the values or samples and dividing by the number of values we have. The position of the median is found by taking the number of samples plus one and dividing that sum by two; the median is the value at that position once the values are put in order from smallest to largest. The mode is the most common value, the value that appears most often in the ordered data.

 
Illustration: Calculate the measures of central tendency for sample data consisting of the diameters of seven randomly selected bearings from a morning shift in a manufacturing facility. The values are 7.52, 7.36, 7.54, 7.41, 7.52, 7.30, and 7.36.

Mean = (7.52 + 7.36 + 7.54 + 7.41 + 7.52 + 7.30 + 7.36) / 7 = 52.01 / 7 = 7.43

Median
In order to calculate the median we arrange the numbers in ascending order. Therefore, for the median
calculation, we are going to take those seven values, and put them in order.

7.30, 7.36, 7.36, 7.41, 7.52, 7.52, 7.54

Median position = (7 + 1) / 2 = 4

We use the fourth data point in the ordered list, which is 7.41, so the median is 7.41.

Mode
The mode is the most repeated value. Given the seven data points, there are two values that have a frequency of two: 7.36 and 7.52. A data set can have more than one mode; since we have two different modes here, this is a bimodal distribution.
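These results can be checked with Python's standard statistics module; the sketch below simply reuses the seven bearing diameters from the example above.

# Minimal sketch: mean, median, and mode(s) with the standard library (Python 3.8+).
import statistics

diameters = [7.52, 7.36, 7.54, 7.41, 7.52, 7.30, 7.36]

print("mean:", round(statistics.mean(diameters), 2))     # 7.43
print("median:", statistics.median(diameters))           # 7.41
print("modes:", statistics.multimode(diameters))         # [7.52, 7.36] -- bimodal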

Measures of Dispersion
We shall now discuss dispersion in more detail. Measures of dispersion are helpful in the practice of Six Sigma execution for comparing one process to another; we use them to understand how the data behaves for each process and which process is more tightly grouped around its mean compared to another. There are three main measures of dispersion: range, standard deviation, and variance.


Range: The first measure of dispersion is the range, which is calculated by subtracting the minimum value from the maximum value. To calculate the range, we first find the minimum value and the maximum value in the data set, and then subtract the minimum from the maximum. This value is useful for comparative purposes against another data set or process, to understand differences in dispersion.

Standard Deviation: While the range provides us with some information about dispersion, the standard deviation provides more information about how much each data point varies from the mean of the process. To calculate the standard deviation, we look at the distance from each data point to the mean of the process and then use those values to calculate how much variation there is within the process. Note that the standard deviation values on the normal curve are also known as the ‘sigma values’.
Let's take a minute to talk about how the standard deviation is calculated. The standard deviation is calculated by taking the difference between each individual data point and the mean, squaring those differences, averaging them (dividing by n - 1 for a sample), and then taking the square root of the result.
Illustration: Suppose we have 5 data points: 13, 18, 14, 28, and 23. Find the mean and the sample standard deviation.
Solution:
Mean = (13 + 18 + 14 + 28 + 23) / 5 = 96 / 5 = 19.2
Sum of squared deviations = (13 - 19.2)² + (18 - 19.2)² + (14 - 19.2)² + (28 - 19.2)² + (23 - 19.2)² = 38.44 + 1.44 + 27.04 + 77.44 + 14.44 = 158.8
s = √(158.8 / (5 - 1)) = √39.7 ≈ 6.3

So we get a value of about 6.3, which shows the variation we have in the process. Ideally, the smaller the number, the less variation we have within the process.


Variance: Variance is a measure of dispersion that is very useful when we are trying to understand how much variation there is within the process. Note that the variance is the square of the standard deviation (no square root is taken), so it is not expressed in the original units of the data. Variance is central to Six Sigma projects, since one of the key goals is to reduce the variation around the mean. (A short code check of the worked example above follows.)
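A minimal check of the worked example, using the Python standard library and the five data points from the illustration above, might look like the following.

# Minimal sketch: range, sample standard deviation, and variance with the standard library.
import statistics

data = [13, 18, 14, 28, 23]

data_range = max(data) - min(data)
print("range:", data_range)                                     # 28 - 13 = 15
print("mean:", statistics.mean(data))                           # 19.2
print("sample std dev:", round(statistics.stdev(data), 1))      # ~6.3
print("sample variance:", round(statistics.variance(data), 1))  # ~39.7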

Frequency Distribution Table


Frequency distribution tables are one of the ways we can look at the dispersion of the data. In general, data used for frequency distribution tables is collected with a check sheet. The information gathered can be displayed in a variety of different ways –

•Pie-Chart: Simple pie chart to display the frequency data


•Histogram: A histogram is used to show the breakdown of the data by class. The histogram is probably the most frequently used way to show frequency distribution information.

Steps involved in building a frequency distribution data


A frequency distribution table has three columns, with the headers Interval, Tally, and Frequency, and a row for each class interval.
•The first step involves organizing the data into class intervals. It is important to note that each of these intervals should be mutually exclusive; the intervals must be set up in such a way that a data point cannot fall into multiple categories.
•The second step involves recording the data in the Tally column. As we collect the data, we would typically use this as a check sheet to record what the values are and capture that information.
•The third and final step involves calculating the frequency.

As a common rule of thumb, we can take the square root of the number of data points to determine the number of classes. To determine the class interval width, we take the range of the data, the maximum value minus the minimum value, and divide it by the number of classes. When we set up the class intervals, we must ensure that they are mutually exclusive, with no overlapping class values; every data point should fall into exactly one interval. It is also important to accommodate all the data points. This is one reason we look at the range of values when determining the class intervals: the range runs from the minimum value all the way through to the maximum value, so all of the data points can be accommodated.
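A minimal sketch of these steps, assuming NumPy and some illustrative cycle-time readings that are not from the source, follows.

# Minimal sketch: building a frequency distribution (assumes numpy installed).
import numpy as np

data = np.array([12.1, 13.4, 11.8, 14.2, 12.9, 13.1, 15.0, 12.4, 13.8,
                 14.6, 11.9, 13.3, 12.7, 14.1, 13.0, 12.2])   # illustrative readings

n_classes = int(round(np.sqrt(len(data))))          # rule of thumb: sqrt of the data count
counts, edges = np.histogram(data, bins=n_classes)  # mutually exclusive class intervals

for count, lo, hi in zip(counts, edges[:-1], edges[1:]):
    print(f"{lo:5.2f} - {hi:5.2f}: {count}")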

Cumulative Frequency Distribution


The cumulative frequency distribution builds on the frequency distribution table and provides information on the cumulative frequency. Cumulative frequency is used to determine the number of observations that lie above or below a particular value in a data set and can be helpful in understanding the behavior of the data. It is a very useful method when we want to understand what value a certain percentage of observations falls beneath. The cumulative frequency distribution table adds extra columns to the frequency distribution table for the cumulative frequency and the cumulative percentage. Using the information collected, we can calculate the cumulative percentage: the cumulative frequency column is divided by the total number of values in the data set, and that value is multiplied by 100 to give the percentage. As a side note, a shortcut for estimating the proportion of the overall distribution accounted for up to a given point is to divide the cumulative count by the sample size plus 1.
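Continuing the frequency-distribution sketch above (same assumptions, with illustrative class counts), cumulative frequency and cumulative percentage can be added as follows.

# Minimal sketch: cumulative frequency and cumulative percentage (assumes numpy installed).
import numpy as np

counts = np.array([3, 6, 5, 2])                 # class frequencies from a frequency table
cum_freq = np.cumsum(counts)                    # running total of observations
cum_pct = 100 * cum_freq / counts.sum()         # cumulative percentage of the data set

for i, (cf, cp) in enumerate(zip(cum_freq, cum_pct), start=1):
    print(f"class {i}: cumulative frequency = {cf}, cumulative % = {cp:.1f}")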

Scatter Diagrams
Scatter diagrams can be defined as a method to graphically understand whether there is a relationship between two different variables. The independent variable (causal variable) is plotted on the x-axis and the dependent variable (result variable) is plotted on the y-axis. Each pair of observations is plotted as an (x, y) coordinate, and we then try to understand whether or not there is a relationship between the two variables as they are plotted.

We should then determine if there is a common pattern in order to calculate and understand the relationship. We can also use a scatter diagram to calculate a relationship equation, so that we can predict the output variable based on the process input. We can test the hypothesis by plotting the two variables in a scatter diagram to determine if there is a high correlation, represented by a straight line. The straight line is determined by calculating the best fit line, which minimizes the distance between each data point and the line. Then, based on the slope of the line, we are able to tell whether the variables are correlated or not. When two variables are highly correlated, this is called a high positive or high negative correlation.

A high positive correlation happens when we have closely grouped points and the best fit line ascends from left to right. This indicates a strong relationship: when we increase the process parameter, we also increase the output characteristic. A high negative correlation is represented by the reverse direction of the line; we still have closely grouped points, but the line descends from left to right. We can still tell that we have a strong relationship from the tightly grouped data points, but now, as we increase the process parameter, we decrease the output characteristic. It is also important to know that, as we determine relationships between variables, we can find low or no correlation. With low correlation, the data points are widely scattered, the distance between each data point and the line is very large, and the line is flat or has only a mildly increasing or decreasing character to it.
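A minimal sketch of fitting and interpreting such a best fit line, assuming NumPy and made-up paired process data, follows.

# Minimal sketch: correlation and best fit line for a scatter diagram (assumes numpy installed).
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])   # process input (causal variable)
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.8, 8.2, 8.9])   # process output (result variable)

r = np.corrcoef(x, y)[0, 1]             # correlation coefficient: near +1 = high positive
slope, intercept = np.polyfit(x, y, 1)  # least-squares best fit line y = slope*x + intercept

print("correlation r:", round(r, 3))
print(f"best fit line: y = {slope:.2f}x + {intercept:.2f}")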
Normal Probability Plots
We can define a probability plot as a graphical way of comparing two data sets based on empirical observations or collected data. When using the probability plot, we use information that was typically collected earlier, building on the scatter plot and the cumulative frequency work, to create the normal probability plot. When we create a probability plot, we are trying to understand whether or not we have a normal distribution. This is essential because, as we move forward with some of the advanced statistical techniques, we need to know if the data is normal; if it is not normal, in some instances we will have to transform the data or use more advanced statistical techniques. The probability plot is considered very beneficial compared with other graphical techniques because it can use small samples and is also very easy to implement. On the x-axis of the probability plot is the variable whose normality we are trying to assess. On the y-axis are the probabilities, which come from the cumulative probabilities calculated with the cumulative frequency functions.

On the x-axis we have the variable, and on the y-axis we plot the cumulative probability. The reference line is determined by finding the best fit line, which minimizes the distance between each data point and the line; any deviation from that line signifies a difference. If a plotted point deviates significantly from the straight line, especially at the ends, then the data does not fit the normal distribution, which is typically what we are trying to establish. When we set up a probability plot diagram, we are hypothesizing that we have a normal distribution.
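A minimal sketch using SciPy's probability plot helper (assuming SciPy and Matplotlib are installed; the data is randomly generated for illustration) is shown below. Points that hug the reference line suggest approximate normality.

# Minimal sketch: normal probability plot (assumes scipy and matplotlib installed).
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(3)
data = rng.normal(loc=50, scale=5, size=40)     # illustrative, roughly normal readings

stats.probplot(data, dist="norm", plot=plt)     # ordered data vs. theoretical quantiles
plt.title("Normal probability plot")
plt.show()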

Histograms
A histogram can be defined as a way to graphically display information from the frequency distribution. It provides useful information on whether or not the frequency distribution is normal, and it will show whether a change has taken place; if something has happened to the process, we will be able to see a change within the data. Histograms are also used to compare outputs from two different processes. The x-axis depicts the mutually exclusive intervals or values and the y-axis represents the frequency values. Note that the x-axis can show either intervals from the frequency distribution or specified values. We can set this up by actually using it as a check sheet: as values fall within a certain interval, we put an x, or fill in a circle; this tallies the frequency. When we finalize the histogram, we simply replace each group of x's or dots with a bar of the same height, and that gives us the histogram.

We can also use the data to determine the dispersion or spread of the data, and we can see whether there have been changes in the data. Note whether we have a single distribution that is normal, or whether we should start looking for bimodal distributions. The histogram also gives us information on the central tendency and dispersion of the data.
Interpreting a histogram also involves examining the mean, mode, range and outliers for basic patterns. We can
determine the mode by looking for the tallest peak of the distribution. We should also examine the histogram to
determine if there is one peak (unimodal distribution), or multiple peaks (bimodal distribution), and also where
the peaks are located. Now, the mean is determined by examining the horizontal axis of the histogram. The histogram also gives us information on the range of the data and any outliers that might be present in the
dataset. If we have a value that was way off to the left or right, then that might be one of the outliers. If we had a
value that was outside of most of the values within the histogram, that would also be considered an outlier.

Stem-and-leaf Plots
Another graphical method for data representation is the stem and leaf plot, a useful way to look at the distribution or dispersion of data that is very similar to a histogram. In the process of creating a stem and leaf plot, we take the data set and break it down into groups. The two columns are the stem and the leaf, where the stem is the initial part of each data value and the leaf is the final digit. Stem and leaf plots are useful because, by listing just that final digit against each stem, we are able to see the relative frequency and look at the distribution of the data. Stem and leaf plots can also be used to compare different distributions.
  We now compare limitations and benefits of stem and leaf plots
•The key benefit of stem and leaf plots is that they allow for a very quick overview of the data.
•Stem and Leaf plot helps to highlight any outliers that might be in the data.
•Stem and leaf plots are very useful for many types of data and can be used for variable and categorical data.
•Stem and leaf plots are not good for small data sets. With a small data set, a stem and leaf plot does not make it easy for a pattern to clearly stand out.
•They are not good for very large data sets either, since there might be too much data in the plot to see the pattern very
well.

Therefore, in order to have a very effective stem and leaf plot, we want a mid-sized data set that allows a proper understanding and a quick overview of whether or not the data falls within a normal distribution.

Box-and-whisker Plots
The box and whiskers diagram, also referred to as the box plot, is a very useful means of summarizing and comparing data. It is referred to as a box and whiskers plot since it has a box that captures the middle 50% of the data. Box and whiskers plots are very useful for showing how much variation there is within the data, since the plot captures the lower 25% of the data in the lower whisker, the range of the middle 50% of the data in the box, and the upper 25% in the upper whisker. It shows how spread out the data is and also where the majority of the data lies.

We now discuss the various elements of the box and whiskers plot and how the data is used to construct it. The first thing we determine is the median of the data, which falls at the middle point of the ordered data. We then find the median of the values between the overall median and the lowest point; that gives us the first quartile, below which lies 25% of the data (the lower whisker covers this bottom 25%). We also find the median of the values between the overall median and the highest point, which gives us the third quartile. What is included in the box is the inter-quartile range, the middle 50% of the data. Box and whiskers plots are very useful when we are comparing two different samples of data.
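A minimal sketch of computing these box plot elements, assuming NumPy and using illustrative data, follows.

# Minimal sketch: quartiles and inter-quartile range for a box plot (assumes numpy installed).
import numpy as np

data = np.array([12, 15, 14, 10, 18, 21, 13, 16, 17, 19, 11, 20])   # illustrative readings

q1, median, q3 = np.percentile(data, [25, 50, 75])
iqr = q3 - q1                                    # the box: middle 50% of the data

print("Q1:", q1, " median:", median, " Q3:", q3, " IQR:", iqr)
print("whiskers span roughly:", data.min(), "to", data.max())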

Measurement Systems
One of the key elements within Six Sigma is having a good measurement system. We are always aiming to improve the processes, to get the mean of the process on target, and thereby reduce variation. Therefore, we must ensure that the data we are collecting is being collected by an adequate measurement system. A measurement system comprises all of the actions that are used to collect data about the process, the product, and the service. Each measurement system contains very diverse elements, including the people, standards, devices, instruments, and methods that together make up the process used to obtain measurements. For instance, if we are trying to improve a production process, or we are having issues with the process being out of bounds, then we must ensure that the measurement system is adequate, reliable, and correct before we start collecting data.

In a Six Sigma project, we want to avoid wasting time, money, and effort collecting data with an inadequate measurement system, since any decisions we make based on that data would be incorrect. We use measurement system analysis to determine what percentage of variability is caused by measurement error. The goal of a measurement system is to give consistent and correct data, since the decisions made throughout Six Sigma projects will be based on what we get from the measurement systems. The goal is to ensure that the data collected reflects the actual variation in the process, product, or service; this is known as part-to-part variation. We also want to ensure that the measurement system is not adding any variation of its own to the measurements. Having a consistent, or precise, measurement system means ensuring that the system is capturing the right data.

The objective here is to make sure that we are getting an accurate measurement of the variable, and to avoid or reduce measurement system variation. There are several causes of measurement system error that could make a system inadequate: it may be the process, the equipment, the measurement tools, devices, or gauges, or operator error. Other causes of measurement error include poor procedures, or a poor understanding of the processes and procedures themselves. We can also expect surprises in the future from unexpected sources; environmental conditions like temperature and humidity can unexpectedly disrupt our measurement systems. Further, there is no end to the sources of human-induced variation in any process.

Now, the two key attributes of a measurement system are accuracy and precision.


Accuracy: Accuracy can be defined as the ability of a measurement tool to measure the true value. In terms of accuracy, we are trying to ensure we have correct data. Accuracy is considered an attribute of the measurement tool or the gauge itself. In other words, with accuracy we are trying to capture the true value of the variable, so the focus is on the average of all of the measurements.

Precision: Precision is the amount of variation in the measurement system when repeated measurements of the same variable of interest for a process, product, or part are taken using the same device. With precision, we want to ensure that we get consistent data with minimal variation. Precision is an attribute of the measurement system itself: we are collecting information on the same variable with the same device, for which we should get the same repeated measurements.
To illustrate the concepts of accuracy and precision, we can use the example of a dartboard and bullseye.

Case 1: If we are throwing darts and those darts land all over the place, the system is neither accurate nor precise.

Case 2: If we are throwing darts and they group loosely around the bullseye, the accuracy has improved, but the system is not precise, because the grouping of darts is still scattered.

Case 3: If the darts are grouped closely together, the system is precise. But if the grouping is not near the bullseye, it is not accurate.
Our aim is to achieve a tight grouping around the bullseye. This indicates we have a system that is both precise and accurate.

A Measurement system analysis has two key components – analysis of the precision and the analysis of the
accuracy.

•There are two measures within precision: repeatability and reproducibility. Repeatability refers to the ability of the same operator to repeat the same results over and over again, and reproducibility refers to multiple individuals being able to reproduce the same results.
•Accuracy is made up of bias and linearity. Bias is the difference between the value we obtain from the reading of the gauge and a known standard, and linearity is a measure of how the bias changes over the full operating range of the gauge.

Measurement Correlation
Measurement correlation is primarily used within measurement systems analysis to compare values from different measurement systems. Measurement correlation is used to assess a few things:

•Measurement correlation is used to assess the values we get from the gauges themselves compared to a known standard.
•Measurement correlation can also be used to compare two different measurement devices against each other.
•Measurement system correlation can also be used to check whether there are differences between operators.

An understanding of each of these areas is essential to know how good the measurement system is, and how we can improve the accuracy and precision of the gauges.

We can perform an assessment to compare the values from the gauges to a known standard. This is typically performed during calibration. Often, we bring the gauges into a temperature- and humidity-controlled environment in order to calibrate them against a known standard. This is typically done with gauge blocks that have been calibrated by an outside source. These calibrated gauge blocks are then used to calibrate the gauges and make sure they give correct readings based on those known standards. By measuring these known items with our devices, we can reveal the error within each device. Measurement correlation also helps in establishing a degree of agreement between two measurers: we can compare the measurements of one operator against another, or even the measurements of the same operator over time.
Repeatability and Reproducibility
Measurement system error occurs due to lack of accuracy and precision. We shall discuss precision, which relates to variation, in detail. The total measured variation originates from the process variation, which is part-to-part variation, and from the measurement system variation. The measurement system variation is caused by:

•Repeatability variation, which comes from the gauge or measuring device itself. When we talk about repeatability, this is based on the same operator.
•Reproducibility variation, which is the difference from operator to operator. When we talk about reproducibility variation, we are comparing different operators.

These concepts can be expressed as variance equations. We need to remember that the goal is to remove or reduce the measurement system variability so that we can focus on reducing the process variability; we want to understand the variation within the product itself. The total variation is the sum of the variation from the process and the variation from the measurement system, and we are trying to reduce the portion caused by the measurement system. Sigma squared denotes the variance, i.e., the variation, of each of these components. The measurement system variation, in turn, is equal to the sum of the variation from repeatability and the variation from reproducibility.
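As an illustration only, the variance decomposition just described can be written out in a few lines of Python; the numbers below are assumed, not taken from the text:

# Hypothetical variance components (sigma squared values), for illustration only
var_part_to_part = 4.0       # process (part-to-part) variation
var_repeatability = 0.3      # gauge/equipment variation, same operator
var_reproducibility = 0.2    # operator-to-operator variation

var_measurement = var_repeatability + var_reproducibility   # measurement system variance
var_total = var_part_to_part + var_measurement               # total observed variance

# Share of the observed variation contributed by the measurement system
print(round(100 * var_measurement / var_total, 1), "% of total variance from measurement")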

Now we investigate the various attributes of repeatability and reproducibility variation types.

•Repeatability is also known as equipment variation: we use the same operator, the same part, and the same device, and we measure multiple times. We do this because viewing angles can differ and tools sometimes wear, but we should be able to perform a measurement using the same operator, part, and device and get the same results. In other words, we should be able to repeat the same results with that same operator.
•Reproducibility, on the other hand, is also referred to as appraiser variation, since we use different operators measuring the same part with the same device multiple times.

The chief point of distinction between repeatability and reproducibility is that in reproducibility we use different operators, whereas in repeatability we use the same operator. We do this so that we can capture any differences in the training given to different operators, or differences in the procedures being followed. When the variation comes from reproducibility, it is typically a sign that we need to improve the training or the procedures for the gauging.

Conducting GR&R Studies


Primary goals for conducting a gauge repeatability and reproducibility study are –

•Assess the precision of the measurement system.


•Determine the percentage of variation caused by the measurement system in the overall variation that’s been observed.
•Determine the focus for minimizing the measurement system variation.

Steps in gauge R&R


1. Prepare: In the first step, ensure that the gauge or device is calibrated or standardized, and duplicate production conditions for the study. Also ensure that the gauge is capable of making accurate readings.
2. Collect resources: The second step in a gauge R&R study is to collect the resources needed for the study, including the number of operators, the number of parts to test, and the number of repeated readings to take. These repeated readings are also known as trials.
3. Collect data: The third key step is to collect the data. During the data collection exercise, we want to ensure consistency and avoid any bias among the operators and recordings.
4. Calculate repeatability and reproducibility: These calculations are typically performed using computer software, but they can also be calculated manually. How the results are presented may vary slightly depending on the software program used; for manual calculations, it depends on the rounding standards followed.
Whichever method is chosen, ensure that it is applied consistently.

Interpreting GR&R Study Graphs and Tables


Total observed variation is comprised of the part-to-part variation which is the variation from the parts and the
measurement system variation. The measurement system variation is further broken down into the repeatability
and the reproducibility. Part to part variation is the variation that happens normally within the processes. While
performing the gauge repeatability and reproducibility study we must ensure that the parts that we are using for
the study are representative of the entire specification limits. We also want to make certain that we have parts
outside the specification limits as these will help ascertain that the measurement system is capable of picking up
that wide range of possible values. The total variation is equal to the sum of the part-to-part, repeatability, and reproducibility variation.

Precision to Tolerance (P/T) Ratio


We have two analytical methods that allow us to analyze and interpret measurement system capability; a short sketch of both ratios follows the list below.

•Precision to total ratio: The precision-to-total ratio is equal to the variation from the measurement system divided by the total variation.
•Precision to tolerance ratio: As part of this calculation, we need to understand the tolerance. The tolerance of the process is equal to the upper specification limit minus the lower specification limit; that provides the range for the tolerance. The specification limits should be set by the customer based on their expectations for the product or service. By taking the difference between the upper and lower specification limits, we determine the allowable tolerance based on customer expectations. The precision-to-tolerance ratio is equal to six times the estimated measurement error divided by the total tolerance, where the total tolerance is the upper specification limit minus the lower specification limit; in other words, it is the precision divided by the total tolerance. (The related precision-to-total ratio instead divides the measurement variability by the total variation.)
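A minimal sketch of these two ratios, assuming example values for the measurement and total standard deviations and for the specification limits (none of these numbers come from the text):

# Assumed example inputs
sigma_measurement = 0.02   # estimated measurement system standard deviation
sigma_total = 0.10         # total observed standard deviation
usl, lsl = 10.4, 9.6       # customer specification limits

# Precision-to-total ratio: measurement variation as a share of total variation
p_to_total = sigma_measurement / sigma_total

# Precision-to-tolerance ratio: 6 x measurement error over the tolerance (USL - LSL)
p_to_tolerance = (6 * sigma_measurement) / (usl - lsl)

print(f"P/Total = {p_to_total:.1%}, P/Tolerance = {p_to_tolerance:.1%}")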

Introduction to Bias
As we know, gauge repeatability and reproducibility studies measure the precision, or variation-related, aspects of a measurement system. A bias and linearity study, on the other hand, is used to assess the accuracy of the measurement system. In a bias and linearity study, the aim is to assess the accuracy of a measurement system by comparing the measurements made by the gauge or measurement tool to a set of known reference values. The key difference is that we use known reference values instead of the actual production parts measured in the gauge repeatability and reproducibility study, and we then compare these values to the total variation. When talking about bias, we are considering the difference between a known reference value and the average measurement from the gauge system: bias is equal to the average measured value minus the reference value. The relationship between bias and accuracy is that bias is inversely proportional to accuracy. Bias does not, however, affect precision, since we can still get the same repeated values over and over again.

While conducting a bias study, there are four main steps that we need to follow.

•In the first step we choose a set of parts or variables of interest for the study. The parts should be chosen in a way that minimizes process variation; ideally, we want to choose parts that are created close together in the same production run. This minimizes bias that may be due to process variation.
•In the second step we choose the operator. We want to choose an operator or appraiser who is experienced with the measurement instrument and trained in the standard operating procedure for measuring the part. This reduces the risk of bias caused by the operator rather than the gauge itself.
•In the third step we monitor the study and record the measurements. We need to randomize the parts and make sure the operator measures them a minimum of 10 to 12 times. We record the gauge readings and hide them from the operator to minimize the risk that he or she will bias any subsequent measurement.
•The last and final step involves calculating the bias, the amount by which the measurement is off from the reference value or standard. We take the average measurement minus the reference value to calculate the bias.

Calculating Bias
We calculate the average by summing all of the individual readings and dividing by n, the number of times the standard is measured. Once we have the average, we can calculate the bias: bias equals the average measurement minus the reference value T, the value of the standard. Bias can be negative or positive. A negative bias means the measurements are less than the reference value; a positive bias means the measurements are greater than the reference value. Bias can be expressed as a percentage of the total process variation or as a percentage of the tolerance set for the process. To express it as a percentage of the process variation, we take the bias, divide it by the total process variation, and multiply by 100.
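The bias arithmetic is simple enough to sketch directly; the readings, reference value, and process variation below are assumed values for illustration only:

readings = [5.01, 5.03, 4.99, 5.02, 5.00, 5.02, 5.01, 5.03, 5.00, 5.02]   # repeated measurements
T = 5.00                                    # known reference (standard) value

average = sum(readings) / len(readings)     # sum of readings divided by n
bias = average - T                          # positive: reads high; negative: reads low

total_process_variation = 0.25              # assumed, from a separate process study
print(f"bias = {bias:.4f}, bias as % of process variation = {100 * bias / total_process_variation:.1f}%")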

Introducing Linearity
It is essential to examine measurement systems for linearity. We can define linearity as a change in the bias over the operating range of the measurement device. The system should be equally accurate at all measurement levels. For example, it would not be acceptable to have a measuring device that is accurate when measuring the smaller parts coming off an assembly line but not the larger ones, or a thermometer that measures high oven temperatures without any bias but fails to do so at lower temperatures. We check linearity to ensure that we get accurate readings at all levels of the measurement device. One way to do this is to measure ten parts five times each and examine the differences. A gauge that is not linear might require calibration or replacement.

Steps in Measuring Linearity


There are five key steps in performing a linearity study. Though the process is similar to calculating bias, there
are slight differences.


Choose parts, or variables: Ensure that the parts we choose cover the entire operating range: low, medium, and high.

Choose an Operator: During the second step, we choose an operator, as we do in the bias study.

Monitor and Record: The third and fourth steps involve monitoring the tests, recording the data, and calculating the bias for each part, as we do in the bias study.

Perform Linear Regression: In the fifth step, we perform a linear regression and interpret the results. When we analyze the linear regression graph, if the slope is different from 1, the gauge is non-linear; if the intercept is different from 0, the gauge is biased.
Let us say a Lean Six Sigma team at a tool manufacturing company is analyzing a measurement system. The team chooses three specific parts that represent the entire range of the measurement system, measures each part eight times, and analyzes this information against the reference value for each of those parts. They do this to see if any linearity and bias exist in the measurement system. The team can then examine the bias and check how it changes over the reference values. In this example, the bias changes over the reference values, so the team can state that linearity is present within the gauge system. The team can also tell this by looking at the change in the bias over the reference averages. We shall now discuss some of the basic outcomes of a linearity study. The slope of the regression line tells us whether or not linearity is present. The resulting linear regression graph can be interpreted to understand how much linearity exists within a process. The bias for each part is plotted on the Y axis against the reference values at different levels on the X axis. To interpret linearity, we assess the slope of the resulting regression line. The two most common patterns are a sloped line and a horizontal line. A sloped line indicates inconsistency and the presence of linearity. A horizontal line indicates an absence of linearity, meaning no linearity in the process. Linearity is denoted by the symbol L and is determined from the slope of the regression line.

Percent agreement is used to assess a measurement system when the data is not continuous. Measurement systems and measurement devices work well for continuous data, but we do not always have continuous data; sometimes we need to measure attributes, or discrete data. We do this with percent agreement analysis. With attribute or discrete data, where percent agreement is used, the appraisers do not use an instrument to measure parts or variables; they assign their own rating values. For instance, we could have a gauge that is used only to tell whether a part is good or bad, or whether it is oversized or undersized. Based on that, we would only have attribute data. By using percent agreement analysis, we can see how good the measurement system is for these attribute values.

Attribute data can take different forms, describing physical properties and other characteristics of what is being assessed. For example, we could have color (red, green, yellow), sizes (small, medium, or large), gender (male or female), or result (pass or fail). Discrete values can also be binary, nominal, or ordinal data. Binary data is yes-or-no type data. Nominal data gives us more categorical information; for instance, the reasons for an account cancellation, where the appraiser rating could be the APR, the finance charge, or a late fee. Ordinal data gives us information on a rating scale; we could have appraisers rate quality on a scale of 1 to 5, where 1 is poor and 5 is excellent. It is important to understand when it is best to use percent agreement analysis. We can use it to evaluate the consistency of an individual appraiser's ratings and measurements; this means having the same appraiser respond to the same question over a course of time to see how consistent the ratings are. The second use for percent agreement analysis is to evaluate the consistency of ratings across all appraisers. For example, one measurement might determine that 90 out of 100 items pass inspection, while another determines that only 85 out of 100 items pass. If a total of 75 of the objects were passed by both measures, the agreement between the measures would be 75%. Finally, we can use percent agreement analysis to compare appraiser ratings to the reference or true value. For example, suppose a team uses its measuring device to measure an object with a known standard weight of 5.98 grams. If the team obtains a value of 5.9 grams in 85 out of 100 measurements, the agreement with the reference standard would be 85%.
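A simple sketch of a percent agreement calculation for attribute data, assuming two appraisers each rate the same ten items as pass or fail (the ratings are invented for illustration):

appraiser_a = ["pass", "pass", "fail", "pass", "pass", "fail", "pass", "pass", "pass", "fail"]
appraiser_b = ["pass", "fail", "fail", "pass", "pass", "fail", "pass", "pass", "fail", "fail"]

matches = sum(a == b for a, b in zip(appraiser_a, appraiser_b))
percent_agreement = 100 * matches / len(appraiser_a)
print(f"Agreement between appraisers: {percent_agreement:.0f}%")

The same idea extends to agreement with a reference value: count the ratings that match the known standard and divide by the total number of ratings.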

Process Performance and Capability


We shall now discuss process performance and process capability. Within the DMAIC cycle, our aim is to measure process performance and capability. In the measure phase, we need to understand the baseline of the process; by doing this, we can determine the current state of the process as well as where we need to make improvements. Once we understand the baseline, we can determine what improvements to make. As we make those improvements, we use the process performance to see whether the changes to the system have had a positive or negative impact. Finally, we also measure process performance in the control phase, where we monitor the process improvements to make sure they have been sustained over time.

It is also essential to understand the performance of the current process and what it means in terms of natural process limits in Six Sigma. Natural process limits are based on the natural variation that occurs within the process. On a bell-shaped curve depicting natural process limits, the lower process limit lies to the left of the curve and the upper process limit lies to the right; the process limits indicate the natural process spread, which comes from the voice of the process.
Statistical data from the process, i.e., readings from the actual process, is used to determine the natural process limits and the natural process spread. It is crucial to understand that these are different from the specification limits. When we look at process capability, we are comparing the specifications to the process performance. This is the cornerstone of many improvement initiatives within Six Sigma, as it shows how capable the process is of meeting the specification limits. It is important to understand why we want the process to be capable and what the cost is when it is not capable. In analyzing process capability, we are comparing the specification width to the process width.

Components of process capability include – Specification limits, Process spread, Process Limits.

•Specification Limits: Specification limits are the customer's expectations or business design specifications, determined by a company, a person, or the customer.
•Process Spread: The process spread shows the extent of the process variation that results from both special and common cause variation.
•Process Limits: The process limits represent the natural process variation, determined by the process itself.

In a Six Sigma process, we are trying to fit six standard deviations between the mean of the process and the closest specification limit. On a bell-shaped curve depicting a Six Sigma level process, the process limits, specification limits, and mean are shown, with the mean at the center of the curve. Reducing the process spread and having on-target performance lets us fit the distribution of the process comfortably within the limits. A Six Sigma process runs about 99.99966% defect free, or at a rate of 3.4 defects per million opportunities. Reaching such aggressive targets is possible because the goal of Six Sigma is on-target performance, where the mean and the target are equal or close to equal. This leads to very little variation within the process, so that we can fit those six standard deviations between the mean of the process and the closest specification limit.
Performance Metrics
We shall now discuss the different performance metrics and the DMAIC phases in which they are used. It is crucial to understand in which stages of the DMAIC methodology we use performance metrics.

•Define Phase: In the define phase, we start to understand the process and where we need to make improvements. This relies on data-driven decisions to define the problem statement.
•Measure Phase: In the measure phase, we use the performance metrics to gather more information about the baseline, in order to understand where the process is currently operating.
•Analyze Phase: In the analyze phase, we look for variables or factors within the process that impact the mean or the variation around these performance metrics.

These performance metrics are very helpful in tracking the current process performance and in analyzing issues within the process, product, or service. There are primarily two methods to measure process performance, and both result in a sigma value.

•Measure Variability: In the first method we measure the variability, for which we use continuous data. This provides a predictive measure of the process performance. Since we are using continuous data, we can calculate a capability index. Measuring variability is also useful for designing, evaluating, and optimizing performance. Because we have continuous data, we can use that information to move the mean of the process closer to the target and also reduce the variation.
•Measure Process Performance by Counting Defects: In the second method we measure process performance by counting defects. This provides an outcome measure, since we are using discrete data: the product is either good or bad. No complex calculations are needed, which makes this method very useful for attribute or discrete data.

There are several types of performance metrics used in Six Sigma, and the choice of metric depends on whether we need to count defects or use continuous data to measure the variability of the process. Some of the key metrics that relate to counting defects include defects per unit, rolled throughput yield, parts per million, and defects per million opportunities. Metrics that are based on the variability of the process and that are directly related to the process capability indices include Cp and Cpk. Another key performance metric is the cost of poor quality.

Conducting a Process Capability Study


We shall now discuss the purpose of a process capability study and the steps involved. The objective of conducting a process capability study is to measure the ability of the process we are trying to improve to meet customer requirements. A process capability study provides a baseline for process improvement by determining whether the process is currently capable and how well the current process meets the customer specifications. By performing a capability study, we can understand whether the process is on target or has too much variation. In addition, once we have that baseline, we can apply resources within the organization to make sure that we are successful from both a short-term and a long-term perspective, and that customer satisfaction requirements are met.

Five key steps in performing a process capability study.

1. Determine the specifications, which represent the characteristics that we are going to measure for the process capability study.
2. Verify the assumptions of the process capability study by testing that the data is stable and normal. Here we are verifying the assumptions that the process is stable and normal before moving forward.
3. Gather data, for which we want a representative data sample. We should develop a sampling plan ahead of time so that we get that representative sample.
4. Calculate the capability. We perform the calculation to determine the process capability and the sigma level.
5. In the final step, based on whether or not the process is capable, we make recommendations. If the process is capable, no further action is needed. But if the process is not capable, we will need to take appropriate corrective action to further improve the process.

In short, we are identifying the specifications, and there are some best practices for doing so. This involves establishing the tolerances based on the requirements set forth by the customers and what is happening in the market. There might be other requirements set forth by industry standards or regulatory requirements, and the organization might have its own organizational requirements to meet. In addition, specifications need to be a meaningful measure: they need to be something that is important to the customer, so that they relate back to the customer's expectations. It is important, though, that the specifications be realistic. If the specifications are too tight or unrealistic, many potentially acceptable outputs could be rejected. We want to make sure that we are setting up the organization for success based on the customers' expectations.

Verifying Stability of Process


We shall now focus on determining the stability of a process. Stability is an important assumption before we start collecting data and calculating the process capability: the process needs to be stable and fit a normal distribution over time. An unstable process has points that are outside of the control limits, which means there is no consistent mean and variation over time. In a sample control chart of an unstable process, for example, two points of the process lie outside the control limits. A process is said to be stable when the values fall between the upper and lower control limits, which means a consistent mean and consistent variation over time. In order to determine stability, we check the data using a control chart. A control chart is a more advanced run chart that uses statistically determined upper and lower control limits, and there are different types of control charts. A control chart takes the target, which is the mean of the process, as the center line.

The upper and lower control limits are the mean plus or minus three standard deviations. This provides a voice of the process to understand whether we have consistency over time. It is important to understand that the control limits are based on the process itself, the voice of the process: they reflect how much variation there is within the process in relation to the center line. Data points outside of the control limits indicate that the process is not in control.

The process needs to be in control before we can calculate the process capability. Let us look at four conditions that indicate an unstable process: trends, cycles, spikes, and shifts. We expect some variation within any process, and a few data points in a row moving in the same direction can occur naturally. But multiple data points in a row trending ever upward or downward is a sign that something is happening within the process. In addition, spikes in the data are an indicator that something is happening within the process, causing it to be unstable. We would also not expect the process to cycle up and down, up and down, repeatedly. Finally, there are times when we might see a shift in the data; often, that is a signal that something has changed within the process.

One of the most commonly used control charts is the X bar and R chart.

•X-bar Chart: The X-bar chart shows the average. It is used when we are collecting subgroup samples, typically of two to nine observations, at some time interval; the average of each subgroup is then plotted. This provides information on the overall average of the process and how the average performance is running.
•R Chart: The R chart represents the range of the subgroup samples collected at each interval. This indicates how the range, or spread, of the data behaves across the subgroup samples.
Verifying Normality Assumption
We shall now discuss how to verify the normality assumption. It is important to check that the process follows a normal distribution, because if it does not, the capability study can produce misleading results. For instance, we could misinterpret the control charts or misread special cause patterns on them. We could also have a misleading assessment of the process capability due to incorrect statistical results for the yields, the Cp and Cpk values, and the sigma levels. We could have an incorrect assessment of the alpha and beta risks associated with confidence intervals, acceptance sampling plans, and hypothesis tests. Lastly, we could misidentify process parameters as important or unimportant for the desired results. The normal distribution has some special characteristics.

•Normal distribution has a bell-shaped curve, and the curve has a single peak at the very top.
•Normal distribution is also called unimodal, because it has that single peak and the data is centered on the mean.
•Normal distribution can be described by two parameters, mu (µ) for the mean and sigma (σ) for variation.
•The distribution is symmetric on both sides around the mean, meaning that 50% of the values are less than the mean and
50% of the values are greater than the mean.
•Approximately two-thirds of the data, about 68%, falls within plus or minus one standard deviation of the mean. Also, the mean lies at the center of the distribution, and the two tails of the distribution extend toward infinity without ever touching the horizontal axis.

Three key methods for assessing normality, generally done by visual inspection, are –

•Histogram: The first method is the histogram, where we look at the distribution of the data to see whether it follows a normal distribution.
•Normal Probability Plot: The second visual method is the normal probability plot, where we check whether the data points follow a linear path along the reference line.
•Fat Pencil Test: The third method is the fat pencil test, which is similar to the normal probability plot. In this method we imagine placing a fat pencil over the dots on the normal probability plot; if the pencil covers the dots, the data follows a normal distribution.

For checking the normality we can also compare the data with characteristics of the normal curve. Some of the
methods to check the normality are –

•Capture the mean, the median, and the mode. If the mean, median, and mode of the data are equal to each other, then the
data is likely normal.
•Divide the inter quartile range by the population standard deviation. This is done by calculating the inter quartile range or
IQR for the data and then dividing it by the population standard deviation, or if the sample is large, by the sample
standard deviation. The inter quartile range is the distance between the 25th and 75th percentile. If the result is
approximately 1.33, then the data is likely normal.
•Compare the range to the normal distribution. Approximately 68.27% of the data should fall within plus or minus 1
standard deviation. In addition, 95.45% should fall within plus or minus 2 standard deviations from the mean. And also,
99.73% should fall within plus or minus 3 standard deviations of the mean.
•A goodness-of-fit hypothesis test is another method for checking normality, and the most common type is the Anderson-Darling test. The Anderson-Darling test is especially effective at detecting departures from normality that occur in the tails of the distribution. This hypothesis testing is normally done with statistical software, because it uses more advanced calculations, such as the p-value, which represents the probability that we would be wrong to reject the assumption of normality for the data set. We compare the p-value with a predetermined risk level, or alpha value, which is typically set at 0.05, to determine whether or not we reject the assumption of normality; a software-based sketch of this check follows the list below.
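As a sketch of how such a check might be run in software, the snippet below uses SciPy on simulated data; note that SciPy's Anderson-Darling routine reports a statistic and critical values rather than a p-value, so a Shapiro-Wilk test is shown as well to illustrate the p-value comparison against alpha = 0.05:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.normal(loc=50, scale=2, size=100)   # placeholder for real process data

# Anderson-Darling: compare the statistic with the critical value at the 5% level
ad = stats.anderson(data, dist="norm")
crit_5pct = ad.critical_values[list(ad.significance_level).index(5.0)]
print("AD statistic:", round(ad.statistic, 3), "critical value (5%):", crit_5pct)

# Shapiro-Wilk gives a p-value that can be compared directly with alpha
stat, p_value = stats.shapiro(data)
print("Shapiro-Wilk p-value:", round(p_value, 3))
print("Assume normality" if p_value > 0.05 else "Reject normality assumption")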

Calculating Cp
After validating the assumptions of process stability and normality, we can calculate and use process capability indices such as Cp and Cpk. These calculations use the process spread and the process tolerance (specification limits), and the resulting indices help us assess and compare the process limits with the specification limits. Cp is the process capability index; Cpk is the capability index that also accounts for how well the process is centered. Both Cp and Cpk measure the process spread as a ratio to the process tolerance, or specifications.

The process capability index Cp is calculated by determining the specification width, subtracting the lower specification limit from the upper specification limit, and then dividing by 6 times the standard deviation. The sigma value, or standard deviation, is an estimate of the population standard deviation calculated from a sample of process data. It is essential to note that with Cp we are looking at the specification width divided by the process width; the calculation does not take into account the centering of the process. We are simply taking a ratio of the specification width and the process width. While there is no universal standard for a good Cp value, a Cp value of 1.33 or greater signifies that the process comfortably meets the specification limits.

Most organizations require a Cp value of 1.33 or above. This corresponds to about 0.0064%, or 64 parts per million, outside the specification. In order to have a Six Sigma level process, we need a Cp value of 2. A Cp value between 1.0 and 1.33 means that the process is capable with tight control. When a process has a spread that is about equal to the specification width, meaning the Cp value is close to 1, any slight movement of the mean off center means that significant portions of the process may drift outside of the specification limits in the tails of the distribution; therefore, we need to closely monitor the process. A process with a Cp value of less than 1 is not capable of producing a product or service that meets the customer specifications: the process width is greater than the specification width. It is also important to note that Cp does not take into account process centering. We are looking at a ratio of the specification width and the process width without considering the mean of the process. Therefore, an uncentered process could still have a high Cp value, because it might have a narrow distribution with little variation. Since Cp does not consider the center of the process, the Cp value might indicate that a process is capable even though it is failing to meet the customer specifications.

Calculating Cpk
Cpk as a performance measure considers the mean of the process within the calculation; therefore, it takes into account the centering of the process. Cpk is calculated by taking the minimum of two values: the upper specification limit minus the mean, divided by three standard deviations, or the mean minus the lower specification limit, divided by three standard deviations.

Note that when calculating Cpk, we use the minimum of the distance between the mean and the lower specification limit and the distance between the mean and the upper specification limit. On a bell-shaped curve showing the LSL, mean, and USL, the mean lies at the center of the curve with three standard deviations on each side.
Cpk focuses on the point at which the specification is closest to the process mean: it measures the distance from the mean to the closest specification limit and expresses it as a proportion of the spread, which produces a more accurate picture of the process capability. In this view, we split the six standard deviations into three standard deviations on each half of the calculation; each half accounts for half of the total six standard deviations previously used in the Cp calculation.
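A minimal sketch of the two indices as Python functions; the function names and example values are illustrative and not part of the text:

def cp(usl, lsl, sigma):
    # Specification width divided by the 6-sigma process width; ignores centering
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    # Distance from the mean to the closest specification limit, in units of 3 sigma
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

print(cp(10.6, 9.4, 0.15), cpk(10.6, 9.4, 10.2, 0.15))   # example call with assumed values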

Illustration: Suppose you work for a firm that manufactures electric grinders and you are required to evaluate the performance of the company's products with regard to power output.

Given – Horsepower specification = 1.25 plus or minus 0.10.


Mean for the current process = 1.255.
Standard deviation from the data = 0.025.
Upper specification limit = 1.35.
Lower specification limit = 1.15.

Solution –
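Working through the arithmetic implied by the given values (LSL = 1.15, USL = 1.35, mean = 1.255, standard deviation = 0.025):

Cp = (1.35 − 1.15) / (6 × 0.025) = 0.20 / 0.15 ≈ 1.33
Cpk = min[(1.35 − 1.255) / (3 × 0.025), (1.255 − 1.15) / (3 × 0.025)]
    = min[0.095 / 0.075, 0.105 / 0.075] = min[1.27, 1.40] ≈ 1.27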

This result allows us to focus on the point at which the specification is closest to the overall average. Knowing the Cpk helps us focus on centering the data, pinpoint process capability issues, and reduce the variance. In general, a Cpk of 1.00 has been considered capable; that means the process limits lie just within the specification limits. Since quality requirements have tightened considerably, many organizations now require a Cpk value of 1.33, 1.67, or even 2, based on the customer requirements. As the Cpk value increases, the process variation relative to the specification width is reduced.

In the given illustration, even though the process variation is small, the process mean is not at the center of the specification, so the Cpk (about 1.27) is lower than the Cp (about 1.33) now that centering is taken into account. The mean of the process is closer to the upper specification limit.

Application of Cp and Cpk


We will now discuss the application of Cp and Cpk, and what we do with these concepts if the process is not capable of meeting the specification limits. First, we must determine why the specification limits are not met. Let us explore a few scenarios to see how this applies.

•First case: the process is on target, meaning the process mean overlaps with the target value, and the process is within the specification limits.
•Second case: the process is on target, so the mean and the target are equal, but there is too much variation, so part of the process is running outside the specification limits.
•Third case: the process is within the specification limits, but it is off target, i.e., the target and the mean are not equal, and the process is running closer to the upper specification limit.
•Finally, the process could be off target, with the mean and the target not equal, and also running outside the specification limits, because the distribution of the process extends beyond the specification limits.

Different ways to compare the Cp and Cpk values


•If we take a process that’s centered with little variation, that means we have a high Cp and a high Cpk.
•If we take a process that has a narrow distribution but it is not centered, that means we have a high Cp and a low Cpk.
•If we have a centered process, however, with too much variation. Therefore, we could have a low Cp and a low Cpk.

Cpk takes centering into account because we subtract using the mean of the process and divide by three standard deviations on either half of the equation. Because of this, the Cp is always greater than or equal to the Cpk, and if the Cp and Cpk values are equal to each other, the process is centered. To reach a Six Sigma level of quality, the Cp has to be greater than or equal to 2, the Cpk has to be greater than or equal to 1.5, and the defects per million opportunities has to be less than or equal to 3.4. Note that Cp and Cpk values are ratios, so they have no units; consequently, they can be used for comparing the capabilities of different processes in different departments across an organization.

Calculating Pp
We shall now discuss the capability indices Pp and Ppk and how they differ from Cp and Cpk. Cp and Cpk focus on within-subgroup variation, whereas Pp and Ppk take into account overall variation. Since Pp and Ppk include all sources of variation, they are more reliable indices for assessing long-term process performance as well as actual process performance. The only mathematical difference between the two sets is how the sigma value is estimated. With Cp and Cpk, we use the within-subgroup standard deviation; with Pp and Ppk, we use an overall estimate of the standard deviation, because Pp and Ppk take into account the overall sigma, which includes both within- and between-subgroup variation, with all the samples pooled together. Pp, or the process performance index, is calculated the same way as Cp except for the value of the standard deviation, which reflects the long-term sigma rather than the short-term sigma.

For calculating Pp, we start by subtracting the lower specification limit from the upper specification limit. Then we divide the difference by six times the long-term standard deviation. To determine the long-term sigma, we calculate the overall standard deviation: we take the square root of the sum of the squared differences of the observed values from the mean, divided by the number of samples minus 1. This value is then multiplied by 6 to produce the denominator of the Pp calculation.

Calculating Ppk
We shall now look at Ppk. Like Cp, Pp does not take centering into account; Ppk, like Cpk, does take centering into account. The formula for Ppk looks very similar to the formula for Cpk, except that in the denominator we use the long-term standard deviation to get a more accurate depiction of how the process is actually operating.

Note that the higher the Pp and Ppk values, the more capable the process is. If the Pp value equals the Ppk value, the process is centered; if Pp is greater than Ppk, the process is off center. In order to be at a level of Six Sigma, the Ppk value has to be greater than or equal to 1.5.
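A sketch of the Pp and Ppk calculations in Python, using the sample (n minus 1) standard deviation of pooled data as the long-term sigma; the data and specification limits below are assumed example values:

import statistics

data = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.1, 10.0, 9.9]   # pooled samples (assumed)
usl, lsl = 10.5, 9.5                                               # specification limits (assumed)

mean = statistics.fmean(data)
sigma_lt = statistics.stdev(data)        # overall (long-term) standard deviation estimate

pp = (usl - lsl) / (6 * sigma_lt)
ppk = min((usl - mean) / (3 * sigma_lt), (mean - lsl) / (3 * sigma_lt))
print(f"Pp = {pp:.2f}, Ppk = {ppk:.2f}")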
Capability Ratio and Cpm
We shall now study the capability ratio and Cpm in detail. The capability ratio is the inverse of Cp: it is calculated by multiplying 6 times the standard deviation and then dividing that value by the upper specification limit minus the lower specification limit.

Rules of thumb to interpret the capability ratio:

•If the capability ratio is less than 0.75, the process is capable.
•If the capability ratio is between 0.75 and 1.0, the process is capable with tight control.
•If the capability ratio is greater than 1.0, the process is not capable.

The next capability index is Cpm; let us understand how it relates to the Taguchi loss function. The Taguchi loss function demonstrates why we should strive to continuously improve conformance to the target, as opposed to conformance to specification. In effect, Taguchi states that as we move away from the target of the process, there is a loss from the process and a loss to society. What we want from the process is on-target, consistent performance for the customers. Because this measure is more sensitive to the target, when the process is not on target and the mean is not in the middle of the specification limits, the Cpm will be less than the Cpk, which is already less than the Cp. We use the Cpm index to assess the ability of a process to center around a target, rather than around the process average.

The calculation for Cpm is similar to Cp and Cpk; the big difference is in the denominator. We take the mean minus the target value, square that difference, and add the process variance (the standard deviation squared). We then take the square root of that sum and multiply it by 6 to form the denominator.
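A short sketch of both the capability ratio and Cpm, with assumed example values:

import math

usl, lsl, target = 10.5, 9.5, 10.0     # specification limits and target (assumed)
mean, sigma = 10.15, 0.10              # process mean and standard deviation (assumed)

capability_ratio = (6 * sigma) / (usl - lsl)                        # inverse of Cp
cpm = (usl - lsl) / (6 * math.sqrt(sigma**2 + (mean - target)**2))  # penalizes distance from target
print(f"CR = {capability_ratio:.2f}, Cpm = {cpm:.2f}")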

Process Performance and Sigma Level


Clearly the process sigma level and the capability indices are linked to each other. The sigma level as a metric focuses on process performance: what we are trying to understand is how many standard deviations we can fit between the mean of the process and the closest specification limit. The sigma level can be determined from the defect and capability indices using data tables, manual calculations, or software such as Minitab. It is important to understand the relationship between the yield of the process and the sigma value, and how to use the Z-distribution table to find the corresponding sigma value.
Let’s explore an example to illustrate how the yield and Sigma level are related to each other. Suppose we have a
yield of 90.49%, by subtracting 90.49% from 1, we get the probability of a defect of 0.951. We find this value on
the Z-distribution table, and by taking it over to the Z column, we can see that the Z value relates to 1.3. Then by
taking it up, it relates to 0.01. That tells us that we have a Z value of 1.31, which also means that we have a Sigma
value of 1.31.
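The same lookup can be done in software instead of a printed Z table; a sketch using SciPy, with the yield from the example above:

from scipy.stats import norm

process_yield = 0.9049            # 90.49% yield, i.e. a defect probability of 0.0951
z = norm.ppf(process_yield)       # inverse of the standard normal CDF
print(round(z, 2))                # approximately 1.31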

Process Improvements Recommendation


We shall now discuss what can be done based on the process capability indices and the sigma levels. There are several options: do nothing, change the specifications, center the process, reduce variability, or accept the losses. Let us now discuss each of these options in more detail.

•First, we could do nothing. If we do nothing while the indices show the process is not capable, we will keep making product outside of the specification limits. We could also change the specifications, though the specification limits depend on the customer. These specification limits are set by the customer, so we might be able to negotiate with the customer to loosen the specification limits and allow for more variation within the process; however, it is up to the customer to make that decision.
•Second, we could center the process. The main objective is to get the mean equal to the target, so that the indices fall within the acceptable range or limits.

Let us illustrate this further with an example: we are a manufacturer conducting a capability study on glass used for welders' goggles. The glass must have the right amount of tint to protect the welder's eyes. If there is too much tint and the welders cannot see what they are doing, they may burn themselves; if there is too little tint, the welders' eyes could be damaged. In this case, centering on the target of the right amount of tint is really critical. Using data gathered from 100 samples, we can calculate the indices. A Cp value of 1.5 would indicate the process is adequate as long as it is centered on target. However, while the manufacturer's Cpk is 1.7, the Cpm is 0.883. That tells us that the process is not centered on the target.

In this case, the glass being produced is within the specification limits, but it is consistently below the target: the glass lets in a little more light rather than being a bit darker. In this situation, we would recommend centering the process to bring it back in line with the target and the specification limits. Another option is to reduce variability. However, this is typically the most costly scenario, because often we do not know what is causing the variability within the process. It is not a simple fix like changing some settings on a piece of equipment; it usually means going through a Six Sigma project to understand what is causing the variation within the process, and then using tools such as experimental designs or long-term studies to truly understand what is happening. The last option is to accept the losses, which amounts to opting to do nothing. This option is typically used when the problem is just too expensive to fix; rather than fixing it, we decide to accept the losses and the costs that go along with them. Before making this type of decision, however, we would need to perform a thorough cost-benefit analysis to make sure it makes sense to accept those losses.

Short-term vs. Long-term Capability


It is essential to understand the difference between short-term and long-term process capability, as well as the impact of approximating long-term process capability when short-term data is used.

Short-term data is generally used to calculate the process capability: we collect data over a period of hours, days, or maybe weeks, but we use that short-term data to make inferences about the long-term process capability. Over time, however, variation tends to creep in. It may come from things like wear on the machines, changes in operators, changes in raw materials, or environmental changes. These factors can cause the process to shift from the short-term sigma to the long-term sigma, as was first noticed by the teams at Motorola, who came up with a way to quantify it. This is known as the long-term dynamic mean variation. The typical shift in the mean is between 1.4 and 1.6 sigma, referred to as a 1.5 sigma shift over time. To calculate the long-term capability, we subtract 1.5 from the short-term sigma; 3.4 defects per million opportunities actually corresponds to 4.5 sigma in the long term.

There are conversion tables for the short-term and long-term process sigma that have this 1.5 sigma shift built in. True Six Sigma capability would actually translate to about 2 defects per billion opportunities; it is the 3.4 defects per million opportunities that corresponds to a long-term process sigma of 4.5, because of that built-in 1.5 sigma shift. Usually, when a process is reported as reaching Six Sigma levels, it is 6 sigma in the short term and 4.5 sigma in the long term. This means that for a process to have 3.4 defects per million opportunities over the long term, it must be more capable than 4.5 sigma in the short term in order to accommodate this instability, or natural process shift.

We use hypothesis testing within Six Sigma mainly in the Analyze phase. We use hypothesis testing at this stage of the DMAIC methodology to test whether there are differences in the means or the variances, and to understand whether changes in the process have made a difference in the overall outcome. Some examples of questions that could be answered by hypothesis testing are:

•Whether one courier company delivers more consistently than another.
•Whether two molding machines are producing the same proportion of defective items.

Here we are looking to see whether values have changed, or whether they have stayed equal to the same value over time.

We can use hypothesis testing to check whether our variances, proportions, or mean values have changed over time. The following are the key steps in hypothesis testing.

•The first step involves setting up the null hypothesis and the alternative hypothesis. These should be mutually exclusive, meaning there should not be any overlap between how they are set up.
•The second step is to work out the test considerations: we determine the alpha value and the degrees of freedom for our calculations.
•The third step is to determine the test statistic, which is calculated based on the type of hypothesis test we are using.
•In the fourth step, we compare the test statistic to the critical value (or the p-value to alpha), and based on the results of this comparison we interpret the results.

Hypothesis Test for Paired-comparison


It is essential to understand that the paired-comparison t-test is very similar to a two-sample t-test for means. The point of difference is that the data is paired rather than independent. In a paired-comparison t-test, we are looking at data that has been collected from the same set of subjects: the datasets are connected to each other, so they are not independent but dependent, because there is a corresponding data point in each dataset. We could use a paired-comparison t-test when evaluating before-and-after effects, assuming the data comes from the same subjects.

A paired-comparison t-test can be used when we are looking at quarterly test results for different classes under the same principal, which gives us a connection between the datasets. Another example would be looking at the same task performed manually and automatically, because it is the same task.

Let us see how a paired t-test compares to a two-sample t-test through an example. Suppose a group of patients receives a new medication to reduce their blood pressure, and the same set of patients is tested before and after. The two datasets are dependent because they are connected through the same patients, so we would use the paired t-test. However, we could also approach this with a two-sample t-test: if a group of patients taking the new medication is compared with another group of patients who do not take it, the two samples are independent and not paired, because one group takes the medication and the other does not. In that case we would use a two-sample t-test, because the samples are independent and not paired.

Key characteristics of paired-comparison test


•It works really well when we are doing a before and after comparison, since we are looking at that from the perspective
of that same person, or that same machine, and things like that.
•Paired-comparison tests also work very well when the data is organized in pairs. It's important to note that when we look at the data in pairs there is a dependent aspect: a linkage or connection between the data points. Since we are looking at paired datasets, the two datasets should be the same size, and essentially what we are testing is the differences between the pairs.
•Paired-comparison test can be used when we are looking at the number of defects produced before and after installing a
new machine.
•Paired-comparison test can also be used when we are comparing the responses of an interview from before and after a
change is made, or we can measure the number of complaints received before and after a training program has been
implemented.

Conducting a Paired-comparison T-test


We will now walk through the process of conducting a paired-comparison t-test using an example. Suppose we have 10 high schools that we are comparing within a school district. The district launched an early intervention program to tackle the issue of increased drop-out rates and monitors the effect of that program over two academic years. We are now going to compare the drop-out rate before and after the intervention. Using the hypothesis test, we want to see whether the program helped bring the drop-out rates down at these schools. Looking at those values, the first step of the process is to calculate the differences between the before and after values.

The next move in the process is to set up the two hypotheses and the various test parameters. We want to observe whether there is a difference, so we set up our null hypothesis that the two population means are equal. By saying that the two population means are equal, we are also saying that the difference in the means is equal to zero. Our alternative hypothesis is therefore that the mean difference between the before and after values is not equal to zero. For the above example, the schools could set an alpha value of 0.05. For our test considerations this would be a two-tailed test, because the program could have increased or reduced the drop-out rates. Since it's a two-tailed test, we use alpha over 2, or 0.025, for each tail. Our degrees of freedom would be set at 9, because it is n minus 1 and we have 10 high schools. Using this information, we get a critical value of plus or minus 2.26.

Therefore by using the data table and the differences column, we can calculate the test statistic. In this case, we
are calculating our test statistic or t calculation. Now it’s important to note that as a Six Sigma professional, we
need to know what the paired-comparison t-test is, but this is more for demonstration purposes because
calculating the test statistic using this formula is outside of the Green Belt training. It is recommended that
students would use an online calculator or statistical software such as Minitab, to actually do this calculation. In
the above illustration, we would end up with a test statistic of 2.5 and then using that test statistic we would
compare it to our critical value. Now by using the critical value method, we would compare our test statistic value
of 2.5, which is greater than our critical value of 2.26.

Using the graphical representation we can compare our test statistic of 2.5, and it falls within our rejection region.
Thus based on the critical value method, we would reject our null hypothesis. We could also use the p-value
method and compare our p-value of 0.0339, which is less than 0.05. Therefore, based on the p-value, we would also reject the hypothesis that there is no difference between the before and after student drop-out rates.
In other words, there’s sufficient evidence to support the alternative hypothesis of difference. We can conclude,
based on this analysis, that the intervention has actually reduced the drop-out rate at the schools.
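A hedged sketch of this kind of paired-comparison t-test in Python follows; the before and after drop-out rates below are invented for illustration and are not the course's data.

```python
from scipy import stats

# Hypothetical drop-out rates (%) for 10 schools, before and after the intervention
before = [12.1, 10.4, 15.2, 9.8, 11.3, 14.0, 13.5, 10.9, 12.8, 11.7]
after  = [10.5, 10.1, 13.0, 9.9, 10.2, 12.8, 12.0, 10.0, 11.5, 10.6]

t_stat, p_value = stats.ttest_rel(before, after)   # tests H0: mean difference = 0
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# With alpha = 0.05, reject H0 if p < 0.05 (evidence that the intervention changed the rate).
```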

Minitab Output for the Test


We will now understand how Minitab can assist in more detailed analysis. In Minitab there is a plot known as the Individual Value Plot of Differences, which can be used to examine the individual values in each sample and assess the sample distributions.

Paired-comparison t-tests are helpful for observing whether, once an improvement has been executed, it made a difference to the output. Consider a Lean Six Sigma team at a tax agency measuring the effect of a new process on the case processing time for a certain category of assessees. The team collects processing-time samples for 20 different appraisers, recording the before and after processing times to show the difference around a pilot implementation of a new process. The team wants to determine whether the new process has actually reduced the processing time before spending the time and effort on a full implementation. Having collected the data, the team uses Minitab to perform the paired-comparison t-test. For this we would select Stat, then Basic Statistics, and then Paired t-test.

In this case we need to make sure we have selected the option that each sample is in its own column, that Sample 1 is column C2 (before processing time) and Sample 2 is column C3 (after processing time), and then select OK. We can then look at the results of our paired t-test. Minitab gives us the mean, standard deviation, and standard error of the mean for the before and after processing times and for the difference. We also get the 95% confidence interval for the mean difference and the t-test of the mean difference, which gives a p-value of 0.016. Since our p-value is less than 0.05, we reject our null hypothesis that the mean difference is zero. We can state that there is a difference: there has been a change in our average processing time based on the pilot implementation.

One-sample Test for Variance


After hypothesis tests for means, we shall now look at hypothesis tests for variance. This is important because within Six Sigma we are trying to achieve on-target performance, which is where the tests for means come into play: we look to see whether our mean and our target value are similar. With the tests for variances, we are making changes to the process in order to reduce the variation within the process so that we have a narrower distribution. For tests for variances we look at the standard deviation and the variance: our sigma and our sigma squared. We consider two different types of tests, the one-sample test and the two-sample test. The one-sample test is used to compare the variance in a single population with a specified value. If we want to determine which of two samples exhibits the larger variation or variance, we use the two-sample test for variance; this is where the samples come from two different processes, or from the same process at two different times. For instance, if we had a before and an after improvement initiative, then we would use the two-sample test for variance.

We might use the test for variance with a tire manufacturer developing a new type of tire with the goal of providing a more consistent overall wear rate. The engineers might want to determine whether the variation in wear in the new batch of tires is smaller than the variation in the current tire being produced. For the one-sample test for variance we use the chi-square distribution as we go through our hypothesis test. The chi-square distribution is not normally distributed but skewed to the right; one way to remember this is that the tail of the distribution is off to the right. Let's take a look at an example of how we would determine our hypotheses and our test considerations, starting by setting up our null and our alternative hypothesis (a minimal sketch in code follows).
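Here is that minimal sketch, with all numbers assumed for illustration (they are not from the course): a right-tailed one-sample chi-square test for variance.

```python
from scipy import stats

n = 25            # sample size (assumed)
s2 = 1.8          # sample variance (assumed)
sigma0_sq = 1.2   # hypothesized population variance (assumed)
alpha = 0.05

# H0: sigma^2 <= sigma0^2   vs   Ha: sigma^2 > sigma0^2  (right-tailed)
chi2_stat = (n - 1) * s2 / sigma0_sq
critical = stats.chi2.ppf(1 - alpha, df=n - 1)
p_value = stats.chi2.sf(chi2_stat, df=n - 1)

print(f"chi2 = {chi2_stat:.2f}, critical = {critical:.2f}, p = {p_value:.4f}")
# Reject H0 if chi2_stat exceeds the critical value (equivalently, if p < alpha).
```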

Two-sample Test for Variance

F-test
When we are using the two-sample test for variance, we are pulling information from two samples. These could be two samples from different populations, if we are trying to see whether there is a difference in variance between them, or they could come from the same process when we want to do a before and after comparison. For the two-sample test for variances, we use the F-distribution. Let us now look at how we would set up our hypotheses and our test considerations using the two-sample test for variance. The first step is to set up our null and our alternative hypothesis. Let's take a look at how this would apply with an example. Suppose we're part of a Six Sigma team in a paper factory and we're trying to compare the output of two machines, both built to extract water from pulp. Since we want to see whether there is a difference in variance between the two samples, we would set up our null hypothesis such that the variance for machine 1 is less than or equal to the variance for machine 2, and our alternative hypothesis would be that the variance for machine 1 is greater than the variance for machine 2.
We want to know this with a 95% confidence level, so our alpha would be 0.05. We set this up as a one-tailed test, since we are only interested in a difference in the right tail of the distribution. Next we determine our degrees of freedom. In this illustration, we pull 25 samples from each piece of equipment, so the degrees of freedom for each would be 25 minus 1, or 24, and the degrees of freedom for both machines would be 24. Our assumptions are that the data is continuous, independent, randomly sampled, and normally distributed. The next step in the process is to calculate our test statistic. Since we're using the F-distribution, the test statistic is the larger of the sample variances divided by the smaller of the sample variances. In this case, we have 6.7 divided by 2.5, which gives us a test statistic of 2.68.
We can then use this information to interpret the results. The first step is to find our critical value using the F-distribution, based on our degrees of freedom; since there are 25 samples for each machine, the degrees of freedom are 24 for both machine 1 and machine 2, giving a critical value of 1.98. Since the test statistic of 2.68 is greater than the critical value of 1.98, we reject the null hypothesis. We could also compare our p-value to our alpha value; in this instance the p-value is very low compared to the significance level, so we reach the same conclusion and reject the null hypothesis. Using this information we can state that the variability of machine 1 is greater than the variability of machine 2. Another common use of the two-variance hypothesis test is to check whether we have reduced the variation within our process. Let us now look at how we would use statistical software such as Minitab to do a two-variance hypothesis test.
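Before turning to the Minitab walkthrough, here is a hedged sketch of the same paper-factory F-test in Python, using only the summary figures quoted above (variances 6.7 and 2.5, 25 samples per machine):

```python
from scipy import stats

var1, var2 = 6.7, 2.5
n1 = n2 = 25

f_stat = max(var1, var2) / min(var1, var2)             # 2.68
critical = stats.f.ppf(0.95, dfn=n1 - 1, dfd=n2 - 1)    # ~1.98 for (24, 24) degrees of freedom
p_value = stats.f.sf(f_stat, dfn=n1 - 1, dfd=n2 - 1)

print(f"F = {f_stat:.2f}, critical = {critical:.2f}, p = {p_value:.4f}")
# F exceeds the critical value, so we reject H0 and conclude machine 1 has greater variability.
```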
Let us suppose we are a part of an operations team in a cereal manufacturer and we are evaluating two packaging machines.
So we have Machine 1 and Machine 2. And for each of those we have collected 20 data points. These machines make 200
gram packages of a type of cereal, and the team wants to determine whether the variability for one machine is greater than
the variability for the other machine. With this information the team could then recommend which machine has the least
amount of variability, for management to purchase. In order to do this we would want to use a two-variance hypothesis test.
Now, to do this in Minitab, we would select Stat and then Basic Statistics, and then 2 Variances. It is important to make sure
that we select the right option. Now, each machine in this example is in its own column, so we would select that option. Then
we have two samples, the first one is Machine 1 and the second sample is Machine 2. And then we could also select
graphical options. In this example we would use the Summary plot to get a graphical summary of the result. Then we would
select the OK option and then we would select OK again. Using this information, we get our graphical results and then we
also get our statistical results.
We start by considering the results in the Session window. For the test and confidence intervals for two variances, comparing Machine 1 and Machine 2, Minitab summarizes the method. The null hypothesis for this test is that the standard deviation of Machine 1 divided by the standard deviation of Machine 2 is equal to one, and the alternative hypothesis is that this ratio is not equal to one. We set this up with an alpha of 0.05. We also get statistics for Machine 1 and Machine 2, including the standard deviation, the variance, and the 95% confidence intervals for the standard deviations. Then we have the 95% confidence intervals using two different methods, Bonett's method and Levene's method, along with the p-values from these two tests: 0.152 for the Bonett method and 0.169 for Levene's method. Since both values are greater than 0.05, our alpha value, we fail to reject the null hypothesis and conclude that the standard deviations of our two machines are equal.
The same conclusion can be drawn from the graphical output, which shows the 95% confidence intervals for the standard deviations of Machine 1 and Machine 2 using both Bonett's method and Levene's method. With the box plots of Machine 1 and Machine 2 and the p-values, we can again state that the standard deviations, or the variances, of the two machines are the same.
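A rough Python cross-check of this kind of comparison, assuming hypothetical fill weights rather than the course data, could use Levene's test, one of the two methods Minitab reports:

```python
from scipy import stats

machine1 = [199.8, 200.3, 200.1, 199.6, 200.4, 199.9, 200.2, 200.0, 199.7, 200.5]  # grams, assumed
machine2 = [200.1, 199.9, 200.0, 200.2, 199.8, 200.1, 200.0, 199.9, 200.3, 199.8]  # grams, assumed

stat, p_value = stats.levene(machine1, machine2, center='median')
print(f"Levene statistic = {stat:.3f}, p = {p_value:.3f}")
# If p > 0.05, we fail to reject H0 that the two variances (standard deviations) are equal.
```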
We could use this job aid to find critical values based on the F-distribution and an alpha value of 0.05.
F-distribution table (alpha = 0.05)

df2 \ df1        1         2         3         4         5         9        10        15        20        24        30
    1       161.4476  199.5     215.7073  224.5832  230.1619  240.5433  241.8817  245.9499  248.0131  249.0518  250.0
    2        18.5128   19        19.1643   19.2468   19.2964   19.3848   19.3959   19.4291   19.4458   19.4541   19.46
    3        10.128     9.5521    9.2766    9.1172    9.0135    8.8123    8.7855    8.7029    8.6602    8.6385    8.616
    4         7.7086    6.9443    6.5914    6.3882    6.2561    5.9988    5.9644    5.8578    5.8025    5.7744    5.745
    5         6.6079    5.7861    5.4095    5.1922    5.0503    4.7725    4.7351    4.6188    4.5581    4.5272    4.495
    6         5.9874    5.1433    4.7571    4.5337    4.3874    4.099     4.06      3.9381    3.8742    3.8415    3.808
    7         5.5914    4.7374    4.3468    4.1203    3.9715    3.6767    3.6365    3.5107    3.4445    3.4105    3.375
    8         5.3177    4.459     4.0662    3.8379    3.6875    3.3881    3.3472    3.2184    3.1503    3.1152    3.079
    9         5.1174    4.2565    3.8625    3.6331    3.4817    3.1789    3.1373    3.0061    2.9365    2.9005    2.863
   10         4.9646    4.1028    3.7083    3.478     3.3258    3.0204    2.9782    2.845     2.774     2.7372    2.699
   11         4.8443    3.9823    3.5874    3.3567    3.2039    2.8962    2.8536    2.7186    2.6464    2.609     2.570
   12         4.7472    3.8853    3.4903    3.2592    3.1059    2.7964    2.7534    2.6169    2.5436    2.5055    2.466
   13         4.6672    3.8056    3.4105    3.1791    3.0254    2.7144    2.671     2.5331    2.4589    2.4202    2.380
   14         4.6001    3.7389    3.3439    3.1122    2.9582    2.6458    2.6022    2.463     2.3879    2.3487    2.308
   15         4.5431    3.6823    3.2874    3.0556    2.9013    2.5876    2.5437    2.4034    2.3275    2.2878    2.246
   16         4.494     3.6337    3.2389    3.0069    2.8524    2.5377    2.4935    2.3522    2.2756    2.2354    2.193
   17         4.4513    3.5915    3.1968    2.9647    2.81      2.4943    2.4499    2.3077    2.2304    2.1898    2.147
   18         4.4139    3.5546    3.1599    2.9277    2.7729    2.4563    2.4117    2.2686    2.1906    2.1497    2.107
   19         4.3807    3.5219    3.1274    2.8951    2.7401    2.4227    2.3779    2.2341    2.1555    2.1141    2.071
   20         4.3512    3.4928    3.0984    2.8661    2.7109    2.3928    2.3479    2.2033    2.1242    2.0825    2.039
   21         4.3248    3.4668    3.0725    2.8401    2.6848    2.366     2.321     2.1757    2.096     2.054     2.010
   22         4.3009    3.4434    3.0491    2.8167    2.6613    2.3419    2.2967    2.1508    2.0707    2.0283    1.984
   23         4.2793    3.4221    3.028     2.7955    2.64      2.3201    2.2747    2.1282    2.0476    2.005     1.960
   24         4.2597    3.4028    3.0088    2.7763    2.6207    2.3002    2.2547    2.1077    2.0267    1.9838    1.939
   25         4.2417    3.3852    2.9912    2.7587    2.603     2.2821    2.2365    2.0889    2.0075    1.9643    1.919
   26         4.2252    3.369     2.9752    2.7426    2.5868    2.2655    2.2197    2.0716    1.9898    1.9464    1.901
   27         4.21      3.3541    2.9604    2.7278    2.5719    2.2501    2.2043    2.0558    1.9736    1.9299    1.884
   28         4.196     3.3404    2.9467    2.7141    2.5581    2.236     2.19      2.0411    1.9586    1.9147    1.868
   29         4.183     3.3277    2.934     2.7014    2.5454    2.2229    2.1768    2.0275    1.9446    1.9005    1.854
   30         4.1709    3.3158    2.9223    2.6896    2.5336    2.2107    2.1646    2.0148    1.9317    1.8874    1.840
   40         4.0847    3.2317    2.8387    2.606     2.4495    2.124     2.0772    1.9245    1.8389    1.7929    1.744
   60         4.0012    3.1504    2.7581    2.5252    2.3683    2.0401    1.9926    1.8364    1.748     1.7001    1.649
  120         3.9201    3.0718    2.6802    2.4472    2.2899    1.9588    1.9105    1.7505    1.6587    1.6084    1.554
    ∞         3.8415    2.9957    2.6049    2.3719    2.2141    1.8799    1.8307    1.6664    1.5705    1.5173    1.459

Remember that,
• df1 (the horizontal column header) is the numerator degrees of freedom
• df2 (the vertical row header) is the denominator degrees of freedom
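As an alternative to reading the printed table, the same critical values can be looked up with statistical software; here is a brief hedged sketch in Python (scipy is my assumption, not a tool the course mandates):

```python
from scipy import stats

alpha = 0.05
df1, df2 = 4, 30   # numerator and denominator degrees of freedom
print(round(stats.f.ppf(1 - alpha, df1, df2), 4))   # 2.6896, matching the table entry
```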

Characteristics of Tests for Proportions


Since we are not always dealing with continuous data, it is very essential as a Six Sigma professional to
understand hypothesis testing when we are considering proportions. We would be using the hypothesis test for
proportions when we are testing ratios within a sample. There are indeed several key assumptions that we have
to make when we are using the test for proportions –

•The data should be binary such that this would be either yes/no, pass/fail type information, and this is commonly used
when we are talking about defect rates.
•We also need to assume that our samples are random, our trials for this test are independent, and then the proportions of
interest are constant over time such that they are not changing over time.
•In the hypothesis test for proportions, there are two different types of tests. The first is the one-sample test for proportions. This is useful if, for example, we are looking at the defect rate within the organization and want to see whether the proportion of defects from one manufacturing line is on par with the rest of the organization. The second type is the two-sample test for proportions, used for comparing proportions from two different populations. For instance, we could compare machine 1 against machine 2 to analyze whether the defect rates are the same. Since we are dealing with proportions, it's crucial to have a sufficient sample size to ensure that we can perform accurate calculations; the sample size requirement is checked against minimum conditions on the expected counts, as illustrated in the worked example that follows.

One-sample Proportion Tests


We will now perform a one-sample proportion test using an illustration. Consider a loan processing operation in which the director of the consumer loans department claims that over 75% of loan applications are processed within the accepted timeframe of nine business days. However, the process improvement team within the bank believes this value is actually much lower than 75%. To test this hypothesis at a 95% confidence level, the team records samples and collects information on 25 different applications. The team examines the data and tests the assumptions that the data is random, independent, normally distributed, and has constant variance. Since all these assumptions are met, they can move forward.

The next step in the process is to ensure that there is sufficient information. Considering the sample size, since the team has collected 25 samples and is testing the hypothesized proportion of 0.75, or 75%, of applications processed within that time, n times p-sub-zero gives 25 times 0.75, or 18.75. Then n times 1 minus p-sub-zero gives 25 times 0.25, or 6.25. Comparing these to the minimum value of 5, both values are greater than or equal to 5, so the sample size condition is met. The team is now ready to move forward with the hypothesis test. They set up their hypotheses so that the null hypothesis is that the population proportion p is greater than or equal to the claimed value of 75%, or 0.75, and the alternative hypothesis is that the actual proportion is less than the assumed value of 0.75. Based on how the test is set up, this is a one-tailed test.

For the one-sample test we use the z-test statistic: z = (p′ − p₀) / √(p₀(1 − p₀)/n), where p′ is the sample proportion, p₀ is the hypothesized proportion of 0.75, and n is the sample size of 25. Plugging the values into this calculation gives a test statistic of negative 1.27. With that information, we can interpret the results. Using the z table, with an alpha value of 0.05 and a left-tailed test, the critical value is negative 1.645.

Comparing our test statistic of negative 1.27 to the critical value of negative 1.645, the test statistic is not as extreme as the critical value, which means it does not fall in the rejection region, and therefore we fail to reject the null hypothesis. We can also look at it in terms of the p-value: since the test statistic of negative 1.27 does not go beyond the z-value of negative 1.645, this further validates the conclusion that we fail to reject the null hypothesis (a hedged sketch of this calculation follows).
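That hedged sketch follows; the sample proportion of 0.64 (16 of 25 applications) is my inference from the quoted test statistic of -1.27 and is not stated explicitly in the course material.

```python
import math
from scipy.stats import norm

n = 25
p0 = 0.75            # claimed proportion
p_hat = 16 / 25      # assumed sample proportion (0.64)

z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)   # ~ -1.27
critical = norm.ppf(0.05)                          # ~ -1.645 for a left-tailed test at alpha = 0.05
p_value = norm.cdf(z)

print(f"z = {z:.2f}, critical = {critical:.3f}, p = {p_value:.3f}")
# z does not fall below the critical value, so we fail to reject H0: p >= 0.75.
```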

Two-sample Proportion Tests


Let us consider the two-sample test for proportions. This is used when we're comparing two different populations to see whether there is a difference between their proportions. Let us look at how we would perform this calculation using an illustration. Suppose we are part of a Six Sigma team analyzing the call durations of the night and day shifts at an inbound international call center, and we have information from the day shift and the night shift that we are trying to compare. The goal is to determine whether there is a considerable difference between the proportions of escalated calls in the two shifts. An escalated call is one that the operator has not been able to handle and that has been escalated to the next level or tier. To perform this task, the team takes 400 samples from the day shift and 300 samples from the night shift.

Out of these samples, 25 of the day-shift calls were escalated and 42 of the night-shift calls were escalated. The team now needs to determine whether we can infer from the samples that, on the whole, the escalation rates for the two shifts are different. We set up our hypotheses such that the null hypothesis is that the two proportions are equal and the alternative hypothesis is that the two proportions are not equal; given this, we have a two-tailed test. For the calculation we again use a z statistic: z = (p1′ − p2′) / √[ pₚ′(1 − pₚ′)(1/n₁ + 1/n₂) ], where the pooled proportion pₚ′ = (x₁ + x₂) / (n₁ + n₂). Using this information, we can interpret the results. The resulting p-value is 0.025, and since this p-value is less than our alpha value of 0.05, we reject the null hypothesis.
This means that the proportion of the escalated cases in the day shift is significantly different than those in the
night shift. Practically, there is something other than just random or mere chance happening in this difference. So
the Six Sigma team needs to go back and look at what is the difference between day shift and night shift. Also
with statistical software we can easily analyze summary data to do a test for proportions. Let us consider an example of how we would perform the test. Suppose we are part of a Lean Six Sigma team analyzing 140 mortgage cases filed by the city A branch of a national financial institution. It was found that 12 of the filings required extensive re-work before they could be further processed. The same team found that another office of the company, which caters to city B, had 10 defective filings out of 100 such cases. As a team we want to determine, at a 0.01 level of significance, whether the city A branch has done poorer quality work in filing these cases than city B. In this case we only have summary information, but we can analyze that summary data using statistical software such as Minitab. For this we would select Stat – Basic Statistics, and then 2 Proportions.
It is very essential that while using summary data we are not pulling any information from our columns, or our
worksheet within Minitab. We would need to ensure that we select Summarize data. Our Sample 1 is the
information from city A and with city A we had 12 out of 140 mortgage cases that were defective. Sample 2 is our
city B, and we had 10 out of 100 defective filings. Now since we’re not using the standard 0.05, we need to select
Options. And then under Options we need to change our Confidence level to 99.0%. Then we would select OK and
then OK for the Two-Sample Proportion. Now with that we can see in our session data, Sample 1 had 12 out of
140, and Sample 2 had 10 out of 100. And so that gives us proportions of 0.085 and basically 10%. And then what
we’re doing with our hypothesis is, we’re testing to see if there’s a difference between those two proportions.

Using the information, we get our p-value with the test for the difference. We also get a p-value from using the
Fisher’s exact test. Now in this case, we are going to compare our p-value to our significance level – which was
0.01. Since our p-value is greater than our significance level, we fail to reject our null hypothesis that they are
equal. So we can say that these two proportions are equal.
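A hedged sketch of the pooled two-proportion z-test on this same summary data (city A: 12 of 140, city B: 10 of 100), following the formula given earlier; the two-sided p-value mirrors the "difference between the proportions" framing:

```python
import math
from scipy.stats import norm

x1, n1 = 12, 140   # city A defective filings
x2, n2 = 10, 100   # city B defective filings

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
z = (p1 - p2) / math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_value = 2 * norm.sf(abs(z))   # two-sided p-value

print(f"p1 = {p1:.3f}, p2 = {p2:.3f}, z = {z:.2f}, p = {p_value:.3f}")
# At the 0.01 significance level, p > 0.01, so we fail to reject H0 that the proportions are equal.
```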

Basic Concepts of ANOVA


With the hypothesis tests so far, we have been comparing the means or the variances between two populations, or within one sample. ANOVA, commonly referred to as Analysis of Variance, is based on testing the differences between two or more means. When we perform an ANOVA, the null hypothesis is that all of the means are equal. For instance, we could use this if we are looking at an attribute of a material that is supplied by several different suppliers and we want to see whether there is a difference between the means amongst those suppliers; if there is a difference, we can look to see which supplier has the best mean. We could also use it to test different levels of temperature: for instance, we could have three different levels of temperature and want to see which one affects the moisture content of a product.

In ANOVA there are three key concepts: factor, levels, and response. Factors are the different aspects being studied; referring to our supplier example, the factor would be the supplier and the levels would be the individual suppliers, in this case Supplier A, B, and C. The response is the customer characteristic that is important and that we are trying to improve.

In this case it could be density and we could be looking at the mean from each of these three suppliers, in terms
of density. In addition, we will have other concepts such as our main effect and our interaction in future topics.

When we discuss one-way ANOVA, we are looking at only one factor. For instance, we could look at the variation amongst different treatment means and the variation within treatments, or the experimental error, but with one-way ANOVA we are only looking at one factor. With two-way ANOVA, we can look at two factors: for instance, we could have factor A and factor B, and we could be testing our experimental error. When we perform an ANOVA, we get useful information about where the differences are coming from, or where the variation occurs.
One-way ANOVA Test
We shall now consider an illustration to understand how we would actually conduct a one-way ANOVA test. Assume that ABC Ltd. is a large insurance company looking at its automobile insurance claims, which are processed at five centers internationally. We want to look at the impact of a process improvement project by examining the average processing time: is it the same for all five centers, or are there differences between them? This starts by setting up our hypotheses.

Our null hypothesis is that the means of all five centers are the same, and our alternative hypothesis is that the means are not all equal, that at least one is different. We want to know this with 95% confidence, so our alpha value is 0.05. Although a difference in either direction counts against the null hypothesis, the ANOVA comparison itself is made in the upper tail of the F-distribution. We will also need the degrees of freedom, which we discuss next with the calculations.

With a one-way ANOVA, we have several assumptions: the samples are random, independent, normally distributed, and have constant variance across all of the factor levels. Now that we have set up our hypothesis test, let us look at how we would calculate our test statistic. It is essential to note that as a Green Belt, we need to be familiar with the ANOVA table and its key terms, but we would typically not have to perform these calculations by hand; we would typically use statistical software such as Minitab. Within the table, we are looking at our sources of variation, between treatments and within treatments. As part of the calculations, we look at the sum of squares and the degrees of freedom, then the mean square, and we use this information to calculate the F-test statistic. In order to understand these calculations, we need to understand some of the basic variables. Since we have five centers and we are collecting seven pieces of information from each, our capital N is the total number of readings and our lower case n is the number of readings per level.

We are getting seven samples, or seven observations, per center, and k is the number of levels or treatments; we have five centers. Between treatments, the degrees of freedom are k minus 1, and since we have five centers, that is 5 minus 1, or 4. Within treatments, the degrees of freedom are the number of readings minus the number of levels, which gives 30. Using this information, we can find our critical value from the F-distribution table. The first degrees of freedom value is 4, since we have five centers and lose one degree of freedom, and the second degrees of freedom value, which comes from within treatments, is 30. Using that information, we get a critical value of 2.6896, and we can now compare the F statistic to the critical value to interpret the findings.

The formula for the degrees of freedom for the within-treatment variation is uppercase N minus lowercase k, where uppercase N is the number of readings and k is the number of levels or treatments. In the example provided, N equals 35 and k equals 5, giving 30. To find the critical value, the F-distribution table lists the degrees of freedom for variation between treatments in columns and the degrees of freedom for variation within treatments in rows; the critical value is where the row and column intersect.

In this case, since our F statistic is less than our critical value, we fail to reject the null hypothesis. We can reach the same conclusion from the p-value: since the p-value is greater than the alpha value, we also fail to reject the null hypothesis. Because the null hypothesis is retained, we can infer that the average processing times at the five centers are equal to each other. In other words, there is no statistically significant difference between the average processing times at those five centers, and any difference that we are seeing is due to chance.

Consider another example: as a Six Sigma professional, we are working with an organization to reduce the average call hold time. The organization has three different locations, and for each location we have taken five samples of the average hold time. We want to see whether there is a difference between the average hold times amongst the three locations, to see if perhaps there is a best practice at one location.

For this we would input the data based on the location, and then we would select Stat – or statistics. And then
ANOVA and for this we want to do a one-way ANOVA to see if the means are equal. So we would select One-Way
ANOVA. It’s important to make sure that since we’re using columns for our data that we select the Response data
are in separate columns for each factor level. And then for our responses, these should be Location 1, Location 2,
and Location 3. We would also select Options just to make sure that our Confidence level is set at 95, so we
would select OK and then we would select OK again. By running the one-way ANOVA, we would have graphical
output as well as statistical output. Looking at the graphical output, it's difficult to see if there's really a difference, since there is overlap in the confidence intervals. Then we would look at the session data. With the
session data for the one-way ANOVA, we’re comparing Location 1, 2, and 3. Our null hypothesis is that all means
are equal, and our alternative hypothesis would be that at least one mean is different. Our significance level is
0.05, and we’re assuming that the variances are equal for the analysis.

ANOVA Values

ANOVA values

Value          Explanation                                                   Formula
N              Number of readings
n              Number of readings per level or treatment
Ti             Total of readings in each level or treatment
T              Grand total of all readings                                   T = Σyi = ΣTi
C              Correction factor                                             T² / N
k              Number of levels (or treatments)
yi             Individual measurements
DFFactor       Degrees of freedom between treatments                         k – 1
DFError        Degrees of freedom within treatments                          N – k
DFTotal        Total degrees of freedom                                      N – 1
SSB (Factor)   Sum of squares between treatments                             Σ(Ti² / n) – C
SSW (Error)    Sum of squares within treatments                              SSTotal – SSFactor
SST (Total)    Total sum of squares                                          Σyi² – C
MS             Mean square: the sum of squares divided by the degrees of
               freedom, calculated for both between-treatment and
               within-treatment variation
MSError        The ratio of SSError to the corresponding DFError             SSError / DFError
MSFactor       The ratio of SSFactor to DFFactor                             SSFactor / DFFactor
F-statistic    Mean square (between) divided by mean square (within)         MSFactor / MSError
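To show how the entries above fit together, here is a hedged sketch that computes them directly for a balanced one-way design (equal readings per level); the data and function name are illustrative only.

```python
def anova_table(groups):
    """Return SS(between), SS(within), MS(between), MS(within) and F for equal-size groups."""
    k = len(groups)                        # number of levels (treatments)
    n = len(groups[0])                     # readings per level (assumed equal)
    N = k * n                              # total number of readings
    all_values = [y for g in groups for y in g]
    T = sum(all_values)                    # grand total
    C = T ** 2 / N                         # correction factor
    ss_total = sum(y ** 2 for y in all_values) - C
    ss_between = sum(sum(g) ** 2 / n for g in groups) - C
    ss_within = ss_total - ss_between
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (N - k)
    return ss_between, ss_within, ms_between, ms_within, ms_between / ms_within

groups = [[4.2, 3.9, 4.5], [4.0, 4.4, 4.2], [4.6, 4.3, 4.5]]   # made-up data
print(anova_table(groups))                                      # the last value is the F-statistic
```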
The chi-square statistic, apart from being used for a one-sample test for variance, can also be used to determine whether the levels of one categorical variable are related to the levels of another categorical variable. With the chi-square test there are several assumptions.

•We assume that the data is randomly selected and that the sample from each population is sufficiently large.
•With the chi-square test, we are using variables that are discrete or categorical. The test is not useful for continuous data, so we need to make sure that our variables are discrete or categorical.
•We also need to make sure that the frequency count for each cell of the table is greater than or equal to five.
With the chi-square hypothesis test, we have two possible hypotheses. The null hypothesis states that there is no relationship between the variables of interest in the population; the alternative hypothesis states that the variables are interrelated and dependent on each other.

We measure this relationship in terms of the chi-square test statistic. If the chi-square test statistic is equal to zero, the observed and expected frequencies agree exactly and the variables are independent. If the chi-square test statistic is greater than zero, they do not agree exactly, and the larger the value of the chi-square test statistic, the greater the discrepancy between the observed and expected frequencies. This type of discrepancy indicates some effect or dependency between the variables.

Conducting a Chi-square Hypothesis Test


Let us now consider the steps necessary to conduct a chi-square hypothesis test; we will go through this using an illustration and work through the different calculations. Within the chi-square hypothesis test, it is very useful to set up the information in a table format. In this example, we are part of a team within a pharmaceutical company, and we want to determine whether the distribution of side effects of a medication varies amongst patient types. We would have information about the various side effects and compare that against the various types of patients. Within the table, we capture the observed counts, and from this information we can compute the expected frequencies.
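A hedged sketch of this kind of chi-square test of independence in Python, using an invented contingency table of side effects by patient type (not the course's observed counts):

```python
from scipy.stats import chi2_contingency

observed = [
    [30, 20, 10],   # side effect A counts for patient types 1, 2, 3
    [25, 25, 15],   # side effect B
    [15, 25, 35],   # side effect C
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
# A small p-value (e.g. < 0.05) suggests the side-effect distribution depends on patient type.
```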

In Six Sigma, one of the most commonly used tools for seeing whether a change has made a difference in the process is the hypothesis test. A hypothesis test is a statistical analysis in which we check whether a result is statistically significant, as predicted. We use data from our processes to test a claim about a population parameter. We use hypothesis testing within Six Sigma because we want to make data-driven decisions and provide evidence to support an opinion; typically that opinion is that we have seen a change in the process, and we want to use data to support that there actually has been a change. We use statistical significance to quantify or show whether or not that change is significant. Within hypothesis testing, we use sample data to draw inferences about the population. For instance, we may use hypothesis testing within Six Sigma projects to determine whether a process improvement effort has actually reduced the proportion of defective items, or to investigate a claim that a new accounting software package has reduced processing costs.

We could also use a hypothesis test to validate whether the loan processing time at a bank has decreased since the introduction of an online application process. Within Six Sigma:

•In the Define phase, we determine what the issues are within our process; we set the problem, goals, and business case for taking on a Six Sigma project.
•In the Measure phase, we start to identify the key variables or factors that are impacting our process.
•Within the Analyze phase, we want to look closer at those factors to determine whether they have an impact on the mean of our output or on the variation of our output.

When using Six Sigma as a methodology, we must ensure that our output is consistent over time, such that the mean of our process is as close to target as possible and the variation is reduced. In hypothesis testing, there are several different types of hypothesis tests.

•The first type of test is a one-sample hypothesis test for the mean, where, based on the distribution, we want to know whether there is a statistically significant difference between the mean of our process and a target or industry standard.
•The second type of test is a two-sample hypothesis test for the mean, where we compare the mean of one process to the mean of another process.
•There are also several more advanced statistical analyses for hypothesis testing, such as the paired t-test, tests for proportions, tests for variances, and ANOVA.

The last two tests, the test for variances and ANOVA (analysis of variance), are used when we are trying to reduce the variation within our process and want to see whether there has been a difference in the variation of our process once we have made a change to it.

Null and Alternative Hypotheses


Within hypothesis testing, one of the first things we set up are the hypotheses: what we are trying to prove or disprove. The null hypothesis expresses the status quo; it assumes that any observed differences are due to chance or random variation. In general, the null hypothesis, written H sub-zero or H naught (H0), is set up so that two values are equal to each other, one is less than or equal to another value, or one is greater than or equal to another value, and it assumes that any observations are the result of chance or random variation. The alternative hypothesis, expressed as H sub-a (Ha), is what we are trying to test or prove. With the alternative hypothesis, we assume that the observed differences are real and not due to chance or random variation. This usually applies when we change something in our process and try to see whether there has been a difference in the mean of our output. The alternative hypothesis needs to be mutually exclusive from the null hypothesis; in other words, as we test, the values have to fall in one category or the other. With the alternative hypothesis, we are testing the opposite of the null hypothesis, so we test that the two values are not equal, that one is greater than the other, or that one is less than the other.

Using the hypothesis test, there are really two possible outcomes. We want to see whether there is a difference between two values, and the outcome of the hypothesis test could be that we reject the null hypothesis in favor of the alternative hypothesis, which is essentially saying that the result is statistically significant. The other option is that we fail to reject the null hypothesis; when we fail to reject the null hypothesis, we are stating that there is insufficient evidence to claim that the null hypothesis is invalid, or that the alternative hypothesis is true. For instance, in a nursing home we could set up a null hypothesis that any difference in processing time following improvements from a Six Sigma project is due to random variation and chance, essentially that our improvement efforts have not made a difference. The null hypothesis would therefore state that there is no change based on the process improvement efforts, while the alternative hypothesis would be that the process has actually improved based on our Six Sigma efforts. If we know that the average wait time at the outpatient clinic is 10 minutes, then we set up our null hypothesis as the mean being equal to 10.

Our alternative hypothesis would then be that it is not equal to 10. It is important to note that we are testing to see whether there has been a change, which is why we set up the null and alternative hypotheses as equal to and not equal to: the change could have reduced or increased the wait time for our patients. As we go through hypothesis testing, it is also important to note how we present the results. It may not sound natural to present the results in terms of the null hypothesis, but that is how we state them: based on the outcome, we either reject the null hypothesis or fail to reject the null hypothesis.

Statistical and Practical Significance


As a Six Sigma professional, it is extremely important to understand the difference between statistical and practical significance when considering hypothesis testing. When we consider practical significance, we must first understand the rationale for the decisions. Depending on the organization, a small value might be very meaningful to the business. For instance, in health care or air safety, where human health and safety or a catastrophic loss could be involved, a small change might be very significant. It is therefore crucial to understand the test limitations and how they relate back to the rationale for the decisions. Additionally, we need to understand our business goals, since a seemingly large difference might not mean much to some organizations but might be very important to others. So we need to consider the business goals and how these changes relate back to what we are trying to achieve for the organization. When testing for practical significance, there are several key questions that we need to check. First, we need to understand whether there will be any appreciable gain or change, because practical significance relates back to the organization itself and how the change will impact it. In addition, with hypothesis testing, we are using a sample to make inferences about a larger population.

Depending on the results, we need to go back and look at our sample size. If we had a smaller sample size and we are trying to infer about the population, we might have to take that into account and rethink the expected results, because sometimes it is too expensive to have a large sample size. It is also important to think about how simple the change is to implement. From a practical viewpoint, when we think about the level of significance, we want to see the impact it will have on the business and how expensive or easy it is to implement. Additionally, since we are using samples to make inferences about the population, we need to understand whether the differences in the samples have real meaning, and we must look back to see whether the sample was representative or not. Another key question is whether there is a strong financial case for change. Just because something is statistically significant does not necessarily mean that making the change will be cost effective and make business sense for the organization. Knowing that there is a change, we also need to think about how a potentially small value may represent a significant improvement to our process, or vice versa. When we establish practical significance, there are three key areas that we need to consider.

First, we need to consider our confidence interval, and this goes back to our business case: what level of confidence do we really need in our process improvement efforts? It is important as a Six Sigma professional to present a complete picture of the test results and let the business managers decide based on the organization and the business requirements; this allows management to apply their knowledge of the organization and the process to make an educated decision on what is best. We also need to choose the sample size carefully. With a small sample size, there is the possibility that a large difference will not be detected and deemed statistically significant; with a larger sample size, we might find a very small difference that is interpreted as statistically significant, even if it is not practically meaningful. Finally, it is important to think about the strength of significance, which is the p-value that is compared to the alpha value, and the actual size of the difference between what we are comparing.

Point and Interval Estimates


With hypothesis testing, we are using a sample to infer information about the population, so it is important to understand sample estimates of population parameters. Several data characteristics are used within hypothesis testing, including the mean, standard deviation, and variance. The population parameters are represented by Greek letters: for the mean, mu (µ); for the standard deviation, sigma (σ); and for the variance, sigma squared (σ²). The sample statistics are the mean, represented by x-bar (x̄); the standard deviation, represented by s; and the variance, represented by s squared (s²). It is useful to understand the differences between them, because the sample statistics are used to find a single value to estimate the corresponding population parameter. Now let us take a look at how we use point estimates from our sample to infer information about our population. To illustrate this further,
let us take an example – let us suppose that we’re conducting a survey of 2,000 people from across the country
on whether they are for or against universal healthcare. Based on the 2,000 people who were surveyed, we had
1,100 out of those 2,000 that are for universal healthcare. The population parameter in this case is all the people
within the country, but the sample that we are using is 2,000 people, because that’s what we surveyed. The point
estimate then, is the proportionate people who are for universal healthcare.

In this example then, the statistic that we reached from that sample of the population is that 55% of all people in
the country are for universal healthcare. This gives us our point estimate that 55% of the population is in favor of
universal healthcare. Now it’s important to understand though, that there are some weaknesses as we use point
estimates. It is highly unlikely that the point estimates that we are using are exactly the same as the true
population parameters, because we’re using that information from our sample data to infer about the entire
population. That is why it is important to have a range, or interval, of values rather than a single number. We need information that allows us to estimate where the true population mean and standard deviation are most likely to fall. This is where intervals, and particularly confidence intervals, are very useful, since they provide a better measure: confidence intervals give us information on the variability of the sample statistics, on whether two samples originate from the same population, and on whether a target falls within the natural variation of a process. So we are looking at the likely values for the population parameter, and we want a 95% confidence level.

Illustration: XYZ Ltd. is a pharmaceutical company trialing a new drug whose goal is to treat high cholesterol. We want to determine whether the drug improves cholesterol levels to the healthy target of 200 milligrams, so our target value is 200 milligrams. Based on the sample data we collect, we set a confidence level of 95%, and our confidence interval calculation gives a range of 195 to 210. We then check whether the target value falls within this confidence interval. If it does not, then the sample is from a population with a mean that is different from the target value; if the target value does fall within the confidence interval, then the sample is from a population with a mean that is the same as the target value. In this case, the target value of 200 is within our confidence interval, so we can conclude at the 95% confidence level that the drug is decreasing cholesterol levels to between 195 and 210.
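A hedged sketch of this kind of interval calculation in Python, with invented cholesterol readings (the real study data is not given in the course):

```python
import statistics
from scipy import stats

readings = [198, 205, 202, 197, 210, 199, 204, 201, 196, 208]   # hypothetical sample
n = len(readings)
mean = statistics.mean(readings)
sem = statistics.stdev(readings) / n ** 0.5   # standard error of the mean

low, high = stats.t.interval(0.95, n - 1, loc=mean, scale=sem)   # 95% t-based interval
print(f"95% CI: ({low:.1f}, {high:.1f})")
print("Target 200 inside CI" if low <= 200 <= high else "Target 200 outside CI")
```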
We can also use Minitab to find a confidence interval for a population parameter such as a mean, and interpret the result from a table or a graph. Let's look at this using an illustration. Suppose a Lean Six Sigma team at a home construction company wants to analyze the hardboard supplied by two suppliers, supplier one (S1) and supplier two (S2). The team takes 15 samples of boards from each supplier and measures the amount of force, in kilonewtons, needed to break them. In the initial analysis the team wants to determine the confidence interval for the mean for both suppliers and be able to interpret the results from a table or a graph. Within Minitab there are several different ways to do that. We start by selecting Stat, then Basic Statistics, and then Graphical Summary. Our variables would be C2 and C3, since these are supplier one and supplier two, and we are looking for a 95% confidence interval. Then we would select OK.

Process of using Minitab Application


The Minitab application is open. There is an open worksheet, Worksheet 1, which consists of rows numbered from 1 onwards and columns labeled C1, C2, and so on. Before the first row is a header row. The presenter is using three columns, C1, C2, and C3, with the column headings Sample, Supplier 1 (kN), and Supplier 2 (kN), respectively. There are 15 data rows. Each row in the Sample column has a value from 1 to 15. The Supplier 1 column has one value per row ranging from 80.61 to 88.41, and the Supplier 2 column has one value per row ranging from 73.63 to 83.77. There is also a Session window. The presenter selects Stat – Basic Statistics – Graphical Summary. The Graphical Summary dialog box opens and the columns are listed. The presenter selects C2 Supplier 1 (kN) and then C3 Supplier 2 (kN), and they are added to the Variables box. The Confidence level text field has the value 95.0. The presenter clicks OK.

Using this information, we can put Supplier 1 next to Supplier 2 and compare the differences. The summary reports also give us information on the 95% confidence intervals, and we can see that the mean for Supplier 1 is slightly higher than for Supplier 2. We can also see from the summary report that the 95% confidence interval for the mean for supplier one ranges from 83.138 to 85.505, while for supplier two it is 78.569 to 81.391. In this case we are looking for the supplier that supplies the stronger product, so from this analysis we would conclude that supplier one supplies a stronger product. That is one way to get these results in Minitab; a second way is through Graph and then Interval Plot. Since we have two suppliers we would select Multiple Y's and Simple, set the graph variables to Supplier 1 and Supplier 2, and then select OK.

Also there are summary reports generated for Supplier 1 and Supplier 2, each in their own window. The summary
report includes data such as the mean, standard deviation, and variance. There is a chart displaying the mean. In
addition the 95% Confidence Intervals are displayed. For both the mean and the median – the values for Supplier
1 are higher than Supplier 2. The presenter closes both summary reports without saving them. The presenter
clicks Graph and selects Interval Plot. The Interval Plots dialog box opens. There are two categories –  One Y and
Multiple Y’s. Each category has the options Simple and With Groups. The presenter selects Simple in Multiple Y’s.
The Interval Plot: Multiple Y’s Simple dialog box opens. The column headings are listed. The presenter clicks
Supplier 1 and Supplier 2. The Graph variables box is populated with the values – Supplier 1 (kN) and Supplier 2
(kN). The presenter clicks OK. The Interval Plot of Supplier 1 (kN), Supplier 2 (kN) window opens.

Then we would have our interval plot comparing Supplier 1 and Supplier 2, and this plot uses the standard deviations to calculate the intervals. We can see that the confidence interval for Supplier 1 is higher than it is for Supplier 2. And again, since we're trying to select the supplier that has the stronger product, this would indicate that we would want to use supplier one, because the 95% confidence interval for the mean is higher.

Type I and Type II Errors


Now under hypothesis testing, we will be taking information from a dataset, a sample, to make inferences about
the population. Therefore under hypothesis testing, we are trying to make a decision based on this information
and hopefully that decision matches up with that true state of nature – what is actually occurring. Ideally with
hypothesis testing, we are getting accurate enough information that we can draw a correct decision. Therefore if
our null hypothesis is false, ideally, we want to reject the null hypothesis and come to that decision. In addition, if
our null hypothesis is true we want to come to the correct decision by failing to reject the null hypothesis.
However, unfortunately that’s not always the case, and we have different error types.

•Type-1 error (Alpha risk) – A type-1 error occurs when the true state of nature is that our null hypothesis is true, but we reject the null hypothesis. This type of error is known as a false alarm. It is the risk we're willing to take of rejecting the null hypothesis when it's actually true, and this is called the alpha risk. Sometimes it's also referred to as the producer's risk, because this is the risk of thinking that a product is defective when it's actually good, so we're throwing the product away – a risk the producer takes. The common value for alpha is 0.05, which means that we have a 5% possibility of committing a type-1 error.
•Type-2 error (Beta risk): If the null hypothesis is false and there is a failure to reject the null hypothesis, then a Type II
error or beta risk will result.

Types of Error
•A decision-making table is presented. If the null hypothesis is true and the decision is to reject the null hypothesis, the result is a Type I error, or alpha risk.
•If the null hypothesis is true and there is a failure to reject the null hypothesis, the decision is correct, with probability 1 minus alpha (the confidence level).
•If the null hypothesis is false and the decision is to reject the null hypothesis, the decision is correct, with probability 1 minus beta (the power of the test).
•If the null hypothesis is false and there is a failure to reject the null hypothesis, the result is a Type II error, or beta risk.
Under hypothesis testing, it's crucial to understand the difference between our significance level and our confidence level. When we plot our values, the significance level determines a critical value, which is the threshold that separates the regions of acceptance and rejection for the test statistic. The region of acceptance is the set of values of the test statistic for which we fail to reject the null hypothesis when the statistic falls in that range. The rejection region is the set of values of the test statistic for which the null hypothesis is rejected. Depending on where our test statistic falls, we are either in the rejection region or the acceptance region, and this is all based on our alpha value, which is our significance level, typically 0.05.

The confidence level we are looking for is 1 minus our alpha value; in the case of an alpha of 0.05, we are looking for 95% confidence.

Power of a Hypothesis Test


For hypothesis testing, one of the key concepts to understand is the power of the hypothesis test. The power of the test indicates the probability of correctly rejecting the null hypothesis; in other words, we come to a correct decision when the true state of nature is that the null hypothesis is false and our decision is to reject it. The power of the test is calculated as 1 minus beta. This is important because the concept of power helps us to increase the chances of correctly rejecting the null hypothesis; what we are trying to do is improve the likelihood of finding a significant effect when one exists. There are four key factors that affect the power of a test – sample size, population differences, variability, and alpha level.


Sample Size: The first factor is sample size where it’s important to understand that as sample size increases, so does the
power of our test. So when we can increase our sample size, then we are more likely to get the correct result, and sample
size is the most important factor when we are talking about the power of a test and it’s dealt with in detail in the next
topic.

Population Differences: The second factor affecting power is the difference between the populations. Here we want to understand what the difference between our two populations is. When there is a bigger difference between the two populations, the power of our test increases, because larger differences make it much easier to clearly tell that the two populations differ. However, when there is only a small difference between the two populations, the power of our test decreases, since the overlap between the populations reduces the probability that we'll be able to detect the difference with our test.

Variability: The third factor affecting power is variability, which is a measure of the dispersion of our values. When we have less variability within our samples, we have more power; when we have more variability within our dataset, we have less power. These two factors are inversely related.

Alpha Level: The fourth and final factor affecting power is the alpha level. It's important to note that when we're setting up the hypothesis test, the alpha level is something we can set as we run the test. If we look at two distributions that are exactly the same, simply changing the alpha value from 0.01 to 0.05 can significantly change the result of our hypothesis test: we can go from failing to reject the null hypothesis to rejecting the null hypothesis simply by changing the alpha value. The most common value is 0.05, but decreasing or increasing our alpha value can change the overall result of the test when we're comparing our p-value against that value. Essentially, the higher the alpha value, the higher the power.
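The following minimal Python sketch, using scipy, illustrates how these four factors move the power of a two-sided one-sample z-test. All of the numbers are illustrative assumptions, not values from the course material.

from math import sqrt
from scipy.stats import norm

def z_test_power(diff, sigma, n, alpha=0.05):
    # Power of a two-sided one-sample z-test when the true mean is shifted by `diff`
    z_crit = norm.ppf(1 - alpha / 2)      # critical value defining the rejection regions
    shift = diff / (sigma / sqrt(n))      # true shift measured in standard errors
    # probability that the test statistic lands in either rejection region
    return norm.cdf(-z_crit + shift) + norm.cdf(-z_crit - shift)

print(z_test_power(diff=2.0, sigma=10.0, n=30))               # baseline
print(z_test_power(diff=2.0, sigma=10.0, n=100))              # larger sample -> more power
print(z_test_power(diff=4.0, sigma=10.0, n=30))               # bigger population difference -> more power
print(z_test_power(diff=2.0, sigma=5.0, n=30))                # less variability -> more power
print(z_test_power(diff=2.0, sigma=10.0, n=30, alpha=0.10))   # higher alpha -> more power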
Determining Sample Size
Sample size is one of the most important factors when we talk about power of a test. It is important as a Six
Sigma professional when we are going through and performing our hypothesis test that we’re looking at our
sample size; since it is the most important factor and it’s something that can be easily controlled. When we look
at the sample size, as our sample size increases, the power of our test also increases. But it’s important that
when we’re looking and conducting our hypothesis test that we look at calculating the right sample; and it’s
something that should be considered very carefully.

Therefore, if we decide on too many samples, we could be wasting time, resources, or money. But on the other hand, if we collect too few samples, we can actually get inaccurate results. So we want to find the balance between having too many and too few and collect the right amount. We can determine the right amount by calculating what our sample size should be. As part of this calculation, we need to look at our margin of error, E, which is calculated using our critical value, standard deviation, and sample size. What we're trying to do is minimize our margin of error.

The margin of error is calculated as E = Z × (σ / √n),

Where,

Z is the critical value,

σ is the standard deviation,

n is the sample size, and

σ / √n is the standard error.
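As a quick illustration, this formula can be rearranged to give the sample size needed for a desired margin of error, n = (Z × σ / E)². The Python sketch below assumes illustrative values for sigma and the desired margin of error.

import math
from scipy.stats import norm

def required_sample_size(sigma, margin_of_error, confidence=0.95):
    z = norm.ppf(1 - (1 - confidence) / 2)       # critical value Z for the confidence level
    n = (z * sigma / margin_of_error) ** 2       # rearranged from E = Z * sigma / sqrt(n)
    return math.ceil(n)                          # round up so the margin is not exceeded

# e.g., sigma = 12, desired margin of error = 3, 95% confidence -> 62 samples
print(required_sample_size(sigma=12, margin_of_error=3))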

Process of Hypothesis Testing


The key steps within hypothesis testing are –


Establish the hypothesis: This is the null hypothesis and the alternative hypothesis. In the first step we’re determining
what our null hypothesis and our alternative hypothesis should be. Our null hypothesis is that the parameters of interest
are equal and there is no change or no difference. The alternative hypothesis would be the opposite of this. And so our
alternative hypothesis would be that the population parameters of interest are not equal and the difference is real.

Test Consideration: The next step is to determine the test considerations. When we set up our hypothesis, we also need to make sure that we're developing our test considerations, and there are four key ones. We need to select the appropriate test statistic based on what we are trying to test, and identify the appropriate alpha value based on the level of confidence we want in our results. Then we need to determine the sample size and conduct sampling to make sure that we get a representative sample of our data. And we also need to make sure that we're doing assumption checks.

Test Statistics: The third step is to calculate the test statistic. The test statistic is what we'll use when we determine our critical value or p-value in the next step, and it's what we'll use for comparison. There are several different types of test statistic; the z-statistic and the t-statistic are the most commonly used. It's important to know what type of test we are using: the z-statistic is used when our sample size is greater than or equal to 30, and the t-test is typically used when our sample size is less than 30.

p-value: The fourth step is to calculate the critical value or the p-value, depending on which method we use – either the critical value method or the p-value method. If we determine that we're going to use the critical value method, we would use a table, along with the degrees of freedom and our alpha value, to determine the critical value.

Result Interpretation: The fifth and final step is to interpret the results. Once we have determined the method we're going to use, we can interpret the results. For instance, if we use the critical value method, we look at the test statistic we just calculated and see where it falls. If it falls in the rejection region defined by our critical value, we reject our null hypothesis; if it does not fall within the rejection region, we fail to reject our null hypothesis. So the critical value method is a comparison between where our test statistic falls and the critical value that bounds the rejection region. If we use the p-value method, we determine how our p-value relates to our alpha value. For example, if our p-value is less than our alpha value, the test is significant, so we reject our null hypothesis. However, if our p-value is greater than alpha, we fail to reject our null hypothesis.
Note: In case of Interpreting Results, while using the p-value method, if p < alpha, then the test is significant –
reject the null hypothesis. If p > alpha, then do not reject the null hypothesis.

One- or Two-tailed Hypothesis Tests


Now as we set up our hypothesis test, it is very important to make sure that we’re running the right test and we’re
choosing the appropriate type of hypothesis test which is the second step in the process. As we are selecting the
right test, we need to understand if we have a one-tailed test or a two-tailed test.

One-Tailed Test to the Right


With a one-tailed test, we could have a one-tailed test to the right. Here, what we're testing is whether our value has increased beyond some reference value. For instance, our null hypothesis would be that our mean, µ, is less than or equal to some hypothesized value µ0, and therefore our alternative hypothesis would be that µ is greater than µ0. What we are looking for here is where our test statistic falls. So based on our calculation, if we're doing a t-test or z-test, we plot our test statistic to see if it falls in the rejection region or within the acceptance region.

One-Tailed Test to the Left


We could also have a one-tailed test to the left. In this case we're trying to see whether something has decreased. We would set up our null hypothesis such that our µ is greater than or equal to a hypothesized value, and the alternative hypothesis that µ is less than that value. In this example, when we look at our test statistic, it's greater than our critical value and it falls within the acceptance region, so we fail to reject the null hypothesis.

Two-Tailed Test
The next type of test we could have is a two-tailed test. Typically, when we’re setting up a two-tailed test, we’re
looking to see if there is a change. So our null hypothesis would be that the means of our two values are equal to
each other, and then our alternative hypothesis would be that the means are not equal. Now since our values
could be not equal that means it could fall above or below that value. This is the reason why we would do a two-
sided test, because the values could fall in either rejection region. In this case, our test statistic falls within the
acceptance region, and therefore we would fail to reject the null hypothesis.
It is essential to note that most tests in industry and real-life situations fall within this type, since we want to see whether there has been a change based on something we have introduced from one of our process improvement projects; this is where the majority of our tests fall.

Critical Value and P-value Methods


After determining the test statistic and the alpha value, the next step we need to do is to decide which method
that we’re going to use to compare the desired confidence level to the test results that we’re going to apply. So
this is either going to be the critical value method or the p-value method.


Critical Value Method: For the critical value method, we compare our test statistic to our critical value, which is determined based on our alpha value. The critical value is what divides the area under the curve into the rejection region and the acceptance region. If our test statistic falls in the rejection region, we reject our null hypothesis; if it falls within the acceptance region, we fail to reject our null hypothesis. The benefit of the critical value method is that it gives us a graphical representation of where the test statistic falls. The same method applies with a two-tailed test; the difference is that we have rejection regions on both sides, at both ends of the distribution. The next step would be to use our z-distribution table to find the critical value for a given alpha value and, because it's a two-tailed test, the alpha value is split between the two tails.

p-Value Method: The p-value method is typically the most commonly used method within hypothesis testing, and we will see it frequently in statistical software such as Minitab, which gives test results in p-value format. With this method, we compare our p-value with our alpha value. If our p-value is greater than or equal to alpha, then we fail to reject our null hypothesis. Conversely, if our p-value is less than alpha, then we reject our null hypothesis. A simple way to remember this is: if p is high, the null will fly; if p is low, the null will go.
One-sample t- test and z-tests
In hypothesis testing, one of the most commonly used types of test is the test for means. When we're testing for means, we can have a one-sample or two-sample test, and depending on the type of test, there are various test statistics that we'll use. Following are the ways of selecting the test statistic for conducting the sample test –

•When we're performing a test for means with a one-sample test, if we have a known population variance or our sample size is greater than or equal to 30, then we use the z-test.
•If our population variance is unknown and our sample size is less than 30, then we use a t-test.
•For a two-sample test, if our population variance is known or our sample size is greater than or equal to 30, we still use the z-test.
•With two samples, if our population variance is unknown and our sample size is less than 30, our variances are assumed to be either equal or unequal: we use the pooled t-test when the variances are assumed equal and the non-pooled t-test when they are assumed unequal.
•In summary, when we're setting up a test for means, if our variance is unknown and our sample size is less than 30, we use a t-test.
•If our variance is known or our sample size is greater than or equal to 30, we use the z-test.

We must aim to select our test based on the situation and the sample size. For example, let's look at when we would use a one-sample or a two-sample test. We could use a one-sample test to find out whether the average number of defects per unit produced is still equal to its historical average of three – that is, we use the hypothesis test to see whether that historical average has changed over time. We could use a two-sample test when we're comparing the quality of products from two different vendors. A simple helper that reflects these selection rules is sketched below.
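The sketch below simply encodes the selection rules listed above; it is only an illustration of the logic, not part of any standard library.

def select_test_for_means(n, variance_known):
    # z-test when the population variance is known or the sample is large (n >= 30);
    # otherwise fall back to the t-test
    if variance_known or n >= 30:
        return "z-test"
    return "t-test"

print(select_test_for_means(n=45, variance_known=False))   # z-test (large sample)
print(select_test_for_means(n=12, variance_known=False))   # t-test (small sample, unknown variance)
print(select_test_for_means(n=12, variance_known=True))    # z-test (known variance)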

Key assumptions to ensure random data


One of the key assumptions is that we have random data.
•The first check to ensure random data is to create a run chart. In order to do this we would select Stat, then Quality Tools, and then Run Chart. Our data is arranged in a single column, so we would select delivery time for our first input; our subgroup size is one, since we only have one sample for each subgroup, and then we would select OK. Using this information we can see that we have random data, so our data passes the first key assumption test.
•The second test is that we want to make sure we have a normal distribution. To do that we would select Stat, then Basic Statistics, and then Graphical Summary. Our variable would be delivery time and we would set our confidence level at 95%. Then we would select OK, and we would get a summary report for our delivery time. To test for normality we would look at the p-value; since it is greater than 0.05, the data passes the test for normality. We would also be able to look at some of the information for our 95% confidence interval. Using this information we know that our data is normal, and from our run chart we know we have random data. From this we can now set up and run our one-sample t-test.

In order to do that we would select Stat – Basic Statistics – 1-Sample t. We need to make sure that we select one or more samples, each in a column, and our variable is delivery time. Since we are performing a hypothesis test, we would check the box to perform a hypothesis test, and our hypothesized mean is 5. We are using the one-sample t-test to see if there has been a change in our mean. Then we would select OK. Our null hypothesis is that the mean is equal to 5, and our alternative hypothesis is that the mean is not equal to 5. The output shows that 20 is the number of samples, along with our mean, standard deviation, standard error of the mean, 95% confidence interval, t-value, and p-value. Since our p-value is approximately 0, we can state that there has been a change in the mean of our process. It's important to note that in real-life Six Sigma situations, statistical modeling and calculations are much more complex, and they are typically done using statistical software like Minitab; this example was a little more straightforward.
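For readers who want to reproduce this kind of analysis outside Minitab, here is a minimal Python sketch using scipy. The 20 delivery times are hypothetical stand-ins, not the actual data from the Minitab example.

from scipy import stats

delivery_times = [6.2, 5.9, 6.4, 6.1, 5.8, 6.0, 6.3, 5.7, 6.5, 6.1,
                  5.9, 6.2, 6.0, 6.4, 5.8, 6.3, 6.1, 5.9, 6.2, 6.0]
hypothesized_mean = 5.0

# two-sided one-sample t-test: H0: mean = 5 vs Ha: mean != 5
t_stat, p_value = stats.ttest_1samp(delivery_times, popmean=hypothesized_mean)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

alpha = 0.05
if p_value < alpha:
    print("Reject H0: the mean delivery time has changed from 5.")
else:
    print("Fail to reject H0: no evidence the mean has changed.")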

It is essential to note that Six Sigma certifications require us to perform calculations using a simple non-programmable calculator rather than statistical software, so we've used more straightforward scenarios as examples to help you understand the concepts and do the calculations manually. You may therefore find a slight numerical difference between the test statistics, p-values, or critical values calculated using software versus those calculated manually. This difference is usually caused by rounding and is not usually large enough to change the final result.


Two-sample T-Test
Now that we have discussed the one-sample test for means, let us take a closer look at the two-sample test for means. Two-sample tests for means are very useful for comparing the means of two samples. They are commonly used when we are comparing two different products, two different processes producing the same product, or two different suppliers, to see if there is a difference in the means of the outputs. When we do the two-sample test for means, the difference lies in whether we're using a pooled or non-pooled statistical test. There are several assumptions that we need to make about our samples: they must be independent, random, follow a normal distribution, and have either equal or unequal variances. With our two-sample test, the choice of test depends on whether the population variance is known or unknown. If the population variance is known or the sample size is greater than or equal to 30, then we use the z-test. If the population variance is unknown and the sample size is less than 30, then we use the pooled t-test when the variances are assumed equal, or the non-pooled t-test when they are assumed unequal.
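As a rough illustration, the Python sketch below runs a pooled (equal-variance) two-sample t-test on two hypothetical samples, for example output measurements from two processes or suppliers.

from scipy import stats

process_a = [10.2, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0, 10.2]   # hypothetical sample 1
process_b = [9.6, 9.9, 9.5, 9.8, 9.7, 9.4, 9.8, 9.6]         # hypothetical sample 2

# equal_var=True gives the pooled t-test; equal_var=False gives the non-pooled (Welch) test
t_stat, p_value = stats.ttest_ind(process_a, process_b, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Reject H0: the two means differ.")
else:
    print("Fail to reject H0: no evidence of a difference in means.")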

Test Statistic Formulas


Purpose: Use this job aid for support in determining which test statistic formula to use when performing
hypothesis tests for means, and also when calculating degrees of freedom for pooled and non-pooled samples.

Flowchart to determine which formula we need to calculate the test statistic


Hypothesis test for means flow chart
Use this formula to calculate degrees of freedom when we have pooled samples: df = n1 + n2 − 2.

Use this formula to calculate degrees of freedom when we have non-pooled samples: df = (s1²/n1 + s2²/n2)² / [ (s1²/n1)² / (n1 − 1) + (s2²/n2)² / (n2 − 1) ].

Air Filter Case Study


We use this learning aid to answer the questions in the air filter comparison case study.

Let us assume that we work for a custom paint shop and have two suppliers of industrial air filters. Each supplier presents us with a different cost breakdown. The filter's thickness is the main quality concern when determining which product gives us more value for the money. We decide to randomly select 10 samples from each supplier.

Air Filter Data Table

Supplier A Supplier B
79 86
78 82
82 91
85 88
77 89
86 85
84 91
78 90
80 84
82 87
Assuming that the populations are normally distributed and the samples are independent, we want to test the
hypothesis that supplier A’s filter thickness is less than supplier B’s filter thickness.

We set alpha at 0.10.

Ho: Supplier A’s filter thickness = supplier B’s filter thickness

Ha: Supplier A’s filter thickness < supplier B’s filter thickness

This indicates a one-tailed test to the left.

Table of information
n mean standard deviation
Supplier B 10 87.3 3.06
Supplier A 10 81.1 3.18
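For reference, here is a minimal Python sketch that works this case study from the raw data in the table above, computing the pooled two-sample t statistic by hand and comparing it against the left-tail critical value at alpha = 0.10. The variances are assumed equal, as in the pooled test.

import math
from scipy import stats

supplier_a = [79, 78, 82, 85, 77, 86, 84, 78, 80, 82]
supplier_b = [86, 82, 91, 88, 89, 85, 91, 90, 84, 87]

n_a, n_b = len(supplier_a), len(supplier_b)
mean_a, mean_b = sum(supplier_a) / n_a, sum(supplier_b) / n_b
var_a = sum((x - mean_a) ** 2 for x in supplier_a) / (n_a - 1)
var_b = sum((x - mean_b) ** 2 for x in supplier_b) / (n_b - 1)

# pooled variance and pooled t statistic for H0: mean_A = mean_B vs Ha: mean_A < mean_B
sp2 = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
t_stat = (mean_a - mean_b) / math.sqrt(sp2 * (1 / n_a + 1 / n_b))

df = n_a + n_b - 2
t_crit = stats.t.ppf(0.10, df)    # left-tail critical value at alpha = 0.10

print(f"t = {t_stat:.2f}, critical value = {t_crit:.3f}, df = {df}")
if t_stat < t_crit:
    print("Reject H0: supplier A's filters are thinner than supplier B's.")
else:
    print("Fail to reject H0.")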
Multi-variation Analysis
Multi-vari analysis is a tool used within statistical analysis that involves observation and analysis of more than one statistical outcome variable at a time. It is particularly useful within Six Sigma when we have multiple input factors and we are trying to understand how they impact the outcome of our process. Particularly within the Analyze phase of Six Sigma, multi-vari analysis is used to further explore those different input variables – the Xs – and understand how they impact the output of the process that we're trying to improve for our customer. The Xs that we're analyzing are those identified during the Measure phase. Multi-vari analysis is also sometimes called multi-varied analysis. It's used as a graphical technique because it provides a way to view and analyze the effects. For instance, we can try to identify where the variation is coming from within two samples; by graphically looking at the two samples, we can see which one has the most variation. The usefulness of multi-vari analysis, which makes it different from other tools, is that it can be used to determine where that variation is coming from.

There are three key ways in which multi-vari analysis helps with Six Sigma and process improvement.

•Pattern of Variation: The first way is how it displays pattern of variation. The multi-vari chart is very useful by showing
these patterns of variation because it helps to identify various trends within our data. We can also, as a Six Sigma team,
start to identify relationships and gain insight on where the variation is coming from.
•Source of Variation: The second key way is by identifying the sources of variation. As a Six Sigma team, it’s very useful
to start identifying which Xs are the key sources of variation. And the multi-vari chart helps to identify where the various
sources of variation are coming from by looking at those key factors to see which one has the most variation within the
process that we’re trying to improve. Thereby by identifying those variables that have the biggest impact, we can help
drive that process improvement.
•Reducing Variation: The third key aspect involves reducing variation. Multi-vari charts help us understand where we need to reduce or eliminate variation during the next step of the DMAIC methodology, the Improve phase. They offer insight into what type of variation exists and when and where the variation occurs, which provides the knowledge the Six Sigma team needs to reduce or eliminate variation in the next stage of the DMAIC methodology.

Steps in conducting multi-vari analysis


There are four key steps in implementing our multi-vari analysis.

•The first step is to determine the variation, and we do this by observing the process and examining the Xs, or input variables, so that we can start to understand what those key variables are. As a Six Sigma team, we then use this information to determine the types of variation we're seeing in the inputs to our process.
•The second step of a multi-vari analysis is to create a sampling plan. When we create the sampling plan, we want to make sure we set it up so that we're getting a representative sample and capturing all of the necessary information about our process.
•The next step is to collect the data and, once it has been collected, to analyze it. In this step we plot the data so we can look at it graphically, look for trends within our processes, and, as a Six Sigma team, identify which of our Xs have the most variation. We use the plotted data to analyze the data and interpret the various charts.
•The fourth step of our multi-vari analysis is to interpret the results. We would verify what the chart is showing us and that we have collected the right data, and then interpret the results. Based on the results, if we need to do further analysis to understand the next set of variables that also induce variation in our process, we would repeat the multi-vari analysis.

Sampling Plans for Multi-vari Analysis


A primary aspect of our multi-vari analysis is setting up sampling plans. The sampling plans describe how we, as a Six Sigma team, are going to go about collecting the data, and they are very useful in helping the team identify the causes of variation. Sampling plans are important when we do our multi-vari analysis because we are testing to see where variation comes from, so it is essential that we set up our sampling plans such that we are getting representative data. When we talk about representative data, we need to look at the process itself. In particular, we need to understand whether we are running the process over multiple shifts; if so, we need to collect data from each shift. Also, in terms of the process itself, we need to make sure that we're getting information that accurately represents the process. So if we
think about a pane of glass, we wouldn’t take just one measurement on the pane of glass. We might want to
consider it in terms of how the glass has different quadrants. So we have one quadrant, two quadrants, three
quadrants, and four quadrants. With the four quadrants on our glass, we would want to make sure that we’re
getting samples from each of those quadrants or multiple points on that piece of glass.

This gives us more information about where the variation is occurring as well. Then, if this is an operation that
runs over three shifts, by collecting information on each of the shifts that gives us more information as well over
the time and how things are occurring.

There are several guidelines for creating our sampling plan.

•The first one is we want to make sure that we are using a sampling method that is normal for the process. We don’t want
to induce any additional variation by changing how we are collecting data. So we want to look at the process itself and
how it normally operates. Also we want to make sure that we are not interrupting the process to collect our data. In this
way we’ll get more accurate information.
•We also want to ensure that we are sampling the processes on a structured basis. This means that we have got a regular
set frequency that’s going to give us enough information that’s representative of the process itself. For instance, if we are
running four days a week, we want to capture information on each day of the week. Also, if we’re running a six day a
week operation, we would collect information on each day of the week and over each shift. And we would want to make
sure that we have got a systematic structured approach for collecting that information. In keeping that in mind, we also
want to make sure that we’re using new data rather than using historical data. If we use historical data, we’re more likely
to try and make it fit.

Now, we may not have all of the information that we really need from an accurate and strong sampling plan. So
we would want to go through and develop a sampling plan that represents our current process and collect new
data to make sure that we’re collecting the right information.

When we set up our sampling plan, we also want to make sure that we’re selecting at least two positions per part.
When we collect data only on one point in the part, we’re getting limited information. So if we think about a
process that might be extruding an artificial component, there might be subtle differences between the far right
and the far left versus the middle of the part. So we would want to be able to see any positional type data. Now,
when we are collecting that information it’s also important to select at least three consecutive parts for each time
period. This gives us information of the average output during that time and gives us a way to verify that we’re
getting the accurate readings. Also, we want to make sure that we’re selecting at least 20 predetermined time
periods. This allows us to get trend analysis so that we can see what’s happening over time and really look at
those trends.

Steps in creating a Sampling Plan


•The first is to determine the scope. We want to understand our process and how it operates and make sure we’re
accurately representing the scope of the process as we set up our sampling plan.
•Once we have our scope determined then we need to set our timeframe. Using our timeframe we can determine how
often we need to select parts to make sure that we have sufficient data.
•The third step then is to plan the actual data collection. This involves determining the best way to collect the data so that
we are not interrupting the process and inducing variation into the process.
•Then finally, once we have started collecting the data, we want to look at some of the initial samples and start evaluating
those to make sure that we are verifying that we’re getting accurate information. Once we have that we can go through
and collect the rest of the data and then evaluate that data.
Types of Variation
In Six Sigma, it is essential to study and understand the data to determine the type of variation that’s present.
There are three key types of variation –

• Positional Variation: The first type of variation is positional variation, also known as within-part variation, since this is variation that occurs within the same part – we would see it across the same part. Consider an example to understand what positional variation means. Suppose we're on a Six Sigma team at a glass manufacturer, where we found that the finished sheets of glass were varying slightly in width from one end to the other. When we look at our product – our sheet of glass – within every piece of glass the bottom measurement is always 0.08 inches less than the top measurement. This is an indicator of positional variation, because the variation occurs within the part.
•Cyclic Variation: The next type of variation is cyclical variation. Cyclical variation is also known as part-to-part variation
because the variation occurs between multiple parts or it varies from piece-to-piece or unit-to-unit. It could also be
variation from one operator to another operator or from machine-to-machine. Now we continue with the example of the
glass manufacturer above, let us suppose that the investigation then found that the variation occurred only in the glass
produced by our night shift operators. So that is an indicator that we are having a difference based on cyclical variation,
because the variation is greater between the glass produced per operator and not within each piece of glass.
•Temporal Variation: The third type of variation is temporal variation, also known as shift-to-shift variation or variation that occurs over time. This is different from piece-to-piece, part-to-part, or within-part variation because it happens over a specific time. Continuing the glass manufacturer example, suppose that this time the investigation shows that the variation increases at specific times of the day. Further investigation by the Six Sigma team shows that these times are toward the end of the workers' shift. This is an indication that temporal variation is a factor, because something time-related is occurring and causing the variation.

Now let us take a closer look at what a multi-vari chart is and how it’s used during a multi-vari study. With our
multi-vari chart, our x-axis is capturing that time or the sequence and then our y-axis is looking at our
measurement scale. Here we are trying to capture our variation over time and we are using long lines to indicate
where the variation is occurring. It provides a nice quick visual, based on the length of the line, to see where the
most variation is coming from. So we can use this information to compare the variation amongst multiple
sequences or times that are under consideration. Now let us take a closer look at how information would look
that we're capturing over time. With multi-vari charts, we're capturing information on positional, cyclical, or temporal variation. With positional variation, we look for the greatest variation in the length of the vertical lines; the first line is the longest, so it has the most variation. With cyclical variation, we look for the greatest variation in the position of the vertical lines on the y-axis, so we're using information about the location of the lines. With temporal variation, we look for the greatest variation over time, and we can compare the differences between, for example, first shift and second shift to see where there is the most variation.

Interpreting Variation Results


As discussed earlier, there are three types of variation and those include positional, cyclical, and temporal. Each
of these types of variation allows us to understand more of where the variation is coming from and when we look
at our multi-vari charts, it’s important that we are able to interpret the results of the variation using different
graphical examples.

In Six Sigma, as we work to understand the situation with our process and how different factors impact our output, it is essential to look at tools such as correlation. Correlation involves determining the relationship between our Xs and our Ys – our inputs and our outputs. We can use this information to determine quantitatively whether there is a relationship between our Xs and our Ys and, if there is a relationship, how strong that relationship is.
Additionally, with Six Sigma, we can figure out which of our Xs – which of our inputs – have significant relationships with our Ys, and that can help us determine the key variables, the key factors we want to investigate further. But it is essential to note that correlation determines only whether there is a relationship. It does not look for cause and effect. So correlation does not mean that there is causation; it simply means that there is a relationship between the two variables.

Appropriate use of correlation analysis


•First application is when we are trying to relate our Xs, or inputs, to continuous Ys or outputs. Once we know if there is a
relationship, we can determine what our key input variables are, our key Xs, based on the strength of the relationship
between our input and our output variables.
•We can also use correlation analysis if we want to understand whether there is a relationship between two input variables – two Xs that are part of our Six Sigma project – because we might have a relationship between two variables such that changing one factor impacts the other. At this point in our Six Sigma project it's important to understand and quantify those relationships. So now, let's take a look at where we can use correlation in Six Sigma.

For example, if we are looking at a manufacturing process and we’re trying to increase or improve the surface
finish of a bore that we’re drilling, we may want to see if there is a correlation between the speed and feed rates
of our equipment, our machines, and our surface finish of our bore. Within service industries we may think that
there is a correlation between price or communication and the customer’s measured service quality or the
customer’s perception of the level of quality.

Scatter Diagrams for Correlation Analysis


One of the key tools used within correlation analysis is a scatter diagram. This is used to interpret the correlation – the relationship between two factors. A scatter diagram is used with bivariate data, meaning that we have two different types of data. The data is typically represented as (X, Y) pairs, because we plot one variable on the x-axis and try to understand whether there is a relationship with the Y variable. We measure the relationship based on how closely the data points fall to the best fit line through the X and Y values, and based on the slope of that line we can see whether there is a relationship between our X and our Y. When evaluating scatter diagrams, we analyze them based on three specific characteristics – direction, form, and strength.

Direction
Direction is extremely important in a scatter diagram because it shows whether the correlation is positive or negative. If our best fit line rises from left to right, we have a positive correlation.

However, if our line falls from left to right, we have a negative correlation. With correlation, we're talking about a value between negative one and one. Whether the correlation is positive or negative doesn't affect whether there is a relationship; the directionality simply gives us information about how that relationship works.
For instance, with a negative relationship, as X increases, that means our Y is decreasing. And so, understanding
the relationships between those two variables is very important.

Form
The other aspect we must consider is whether our form is linear or non-linear. Linear means that we have a straight-line relationship between our two variables, which is much easier and more straightforward to work with. If we have a non-linear relationship, then we need a more advanced mathematical equation to represent the relationship between our X and our Y.

Strength
The third aspect of correlation analysis is to understand the strength of the relationship. What we are trying to do is determine the best fit line between our X and our Y variables. When the two variables are not related, there is no good way to draw a line between them – the plotted points are scattered, and there is no clear line or connection to draw. When we have a strong relationship, it's very easy to draw the best fit line between the variables, such that we have a clear line moving upwards. If we think about that in terms of the correlation value, a value of one represents a direct, one-to-one relationship between our X and our Y. If we have a weak relationship, the points cluster only loosely around the line, which tells us that there is a weak relationship between our X and our Y. Scatter plots can be used to show the relationship between two continuous variables.

Correlation Coefficient
For correlation analysis, one way to determine the correlation is through Pearson's coefficient. In this case, we are solving for a value of r, which represents our correlation coefficient. As we examine our correlation, we are looking at bivariate data: we are trying to determine whether there is a relationship between x and y, our paired data set, and r tells us the strength of that relationship. In order to calculate r, we need the individual values of the first variable, the Xs, and the individual values of the second variable, the Ys; n represents the number of pairs of data in the data set. When we solve for r, it is a ratio and it will be between negative 1 and 1.
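The Python sketch below computes r directly from paired data using the usual computational form of Pearson's formula. The (x, y) data points are hypothetical and chosen to show a strong positive linear relationship.

import math

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]

n = len(x)
sum_x, sum_y = sum(x), sum(y)
sum_xy = sum(xi * yi for xi, yi in zip(x, y))
sum_x2 = sum(xi ** 2 for xi in x)
sum_y2 = sum(yi ** 2 for yi in y)

# r = [n*Sum(xy) - Sum(x)*Sum(y)] / sqrt{[n*Sum(x^2) - (Sum(x))^2] * [n*Sum(y^2) - (Sum(y))^2]}
r = (n * sum_xy - sum_x * sum_y) / math.sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2))
print(f"r = {r:.3f}")   # close to +1 for this strongly positive linear relationship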

Once we calculate our correlation coefficient, it's important to understand what it means and some of the considerations to take into account for different 'r' values. Let's first look at some of the different correlations. It's critical to understand the context. Our 'r' value is anywhere between negative 1 and 1. With a value of 1, the best fit line slopes upward to the right; with a value of negative 1, it slopes downward to the right. In the middle we have 0, and 0 means there is really no relationship between the two values.

Now, how do we know whether a relationship is strong or not? That really ties into the context: we need to understand our specific problem and what we are looking at. If we're looking at something that's medically related, depending again on the context, we might only consider the relationship strong when the value is 0.9 or higher, or negative 0.9 or lower. It's also important to understand that Pearson's correlation coefficient only works for a linear relationship; it does not apply to non-linear relationships. The third key aspect to understand is that the correlation coefficient is highly sensitive to outliers or extreme values within the data. If we have an outlier, we need to determine whether it's a true, accurate number; if not, we should remove it to get a more accurate measure of the relationship.

Causation
When we talk about Six Sigma and correlation analysis, it is very important to understand the difference between correlation and causation, because the two are commonly confused with each other. When we talk about causation, we are primarily trying to understand which Xs – which of our input variables – cause the Ys, our output variables, to happen. A high correlation only means that we have a relationship between the two variables. Correlation does not equal causation; it doesn't mean that one variable causes the other.

Some of the considerations regarding causation are –


•Correlation is symmetrical and it does not imply causation. If X is correlated with Y, that is the same as saying Y is correlated with X. While correlation is symmetrical, causation is asymmetrical, or directional – it runs one way or the other.
•In addition, causation is not reversible. One item causes another item to happen or, in other words, X causes Y. It flows in
one direction. Now if we think about an example of a hurricane that causes the phone lines to go down, we can’t reverse it
and say Y causes X. The phone line going down doesn’t cause a hurricane.
•Causation can be difficult to determine. There might be a third unknown variable, and in that case the third unknown variable could be the actual cause.

Situations and variable relationships are often very complex and require more analysis. Correlation, though, can help point to causation, so a strong correlation is a good place to start. In finding strong correlations between variables, we also rule out data that is unrelated. This helps the Six Sigma team focus on the relationship to determine what the causation is.

Some of the most common ways an observed correlation between X and Y can arise are –

•Genuine causation exists when there is clear and uncomplicated data that supports the proposal that X causes Y.
•A common response to an unknown variable occurs when both X and Y react the same way to an unforeseen variable.
•Confounding occurs when the effect of one variable, X, on another variable, Y, is mixed up with the effects of another explanatory variable on Y.

Testing Statistical Significance

P-value
We shall now take a closer look at the statistical significance, or p-value, of a correlation coefficient, and why it's important as a Six Sigma team to interpret a correlation in terms of its statistical significance. When we talk about the statistical significance of our correlation, it is essential to note that our multi-vari studies and correlation analysis help the Six Sigma team narrow down the number of inputs and provide insight into the correlation between the variables. But we want to make sure that, as we move forward, we're focusing on the right variables, so that the Six Sigma team takes the right variables further into the analysis rather than spending time, money, and effort on variables whose correlations are not statistically significant. It's also important to note that some correlation coefficients are subject to chance, and we can use statistical significance to test for that. That is why it's important for the Six Sigma team to determine significance. Statistical significance is expressed through the p-value.

Just because we determine that a correlation is significant doesn't necessarily mean that the variable is important. What we want to do is provide statistical evidence of the relationship between the variables, and one way to do that is by looking at our p-value. For example, if our p-value is less than 0.05, where our alpha value equals 0.05, then we can determine that the correlation is significant. What we are trying to do is answer two key questions by determining the statistical significance of our correlation coefficient. The first question is whether the correlation is due to chance or accident. For instance, a p-value of 0.05 indicates that the negative correlation of -0.70 between a call center representative's experience and the call length has a probability of less than 5% of occurring by chance. The other question we want to answer is: what are the chances of finding a correlation value other than the one estimated in the sample? This is important for the Six Sigma team because it lets us answer questions such as, what is the chance of finding a value other than -0.70 in any sample when the correlation is different in the population? This can be determined by the Six Sigma team once the statistical significance is known.
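A minimal Python sketch of this significance check is shown below. scipy.stats.pearsonr returns both the correlation coefficient and its two-sided p-value; the experience and call-length data here are hypothetical, chosen to mimic the negative correlation discussed above.

from scipy import stats

experience_months = [2, 5, 8, 12, 18, 24, 30, 36, 42, 48]                 # hypothetical data
call_length_min   = [9.5, 9.1, 8.6, 8.2, 7.9, 7.1, 6.8, 6.4, 6.0, 5.7]    # hypothetical data

r, p_value = stats.pearsonr(experience_months, call_length_min)
print(f"r = {r:.2f}, p = {p_value:.4f}")

alpha = 0.05
if p_value < alpha:
    print("The correlation is statistically significant at the 5% level.")
else:
    print("The correlation could plausibly be due to chance.")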

Regression analysis
Linear regression is defined as a methodology used to model the relationship between two variables. It is very important in Six Sigma and data analysis because it gives us a quantifiable, mathematical relationship between two variables. In simple linear regression, we are looking for the line of best fit through the data points – the relationship between our X variable and our Y variable. Based on this line and its slope, we get a formula that provides predictions. Essentially, we are looking at the relationship with one Y value for one X value, based on those paired values. With simple linear regression we have a single X predicting the output variable; with multiple linear regression there are multiple Xs, multiple inputs, predicting the output. For instance, height versus weight would be simple linear regression, but height, age, and gender versus weight would be multiple linear regression. The formula for simple linear regression is Y = β0 + β1X + ε, where β0 (beta sub 0) is the Y-intercept when X equals 0, β1 (beta sub 1) is the slope of the line, and ε (E) is the error term.

When we calculate simple linear regression, finding the best fit line means we square the residuals and add them up to see which line has the smallest total squared residuals. Since many different lines could be drawn through a scatter diagram, and it is very cumbersome to find the best fit line by trial, there is a method for finding the line using least squares, also called simple linear least squares regression. The elements of this equation are fairly similar: beta sub 0 hat and beta sub 1 hat represent the estimates of the true beta sub 0 and beta sub 1, where beta sub 0 hat is the value of the Y-intercept and beta sub 1 hat is the value of the slope. The key difference is that we've taken out the error term, so the fitted line is Ŷ = β̂0 + β̂1X.
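As an illustration of the least squares estimates, the Python sketch below fits a simple linear regression by hand using the standard least squares formulas. The height and weight data are hypothetical.

import numpy as np

height_cm = np.array([160, 165, 170, 175, 180, 185, 190])   # hypothetical X values
weight_kg = np.array([55, 61, 66, 70, 77, 82, 88])          # hypothetical Y values

x_bar, y_bar = height_cm.mean(), weight_kg.mean()
# beta_1_hat = Sum[(x - x_bar)(y - y_bar)] / Sum[(x - x_bar)^2]
beta_1_hat = np.sum((height_cm - x_bar) * (weight_kg - y_bar)) / np.sum((height_cm - x_bar) ** 2)
# beta_0_hat = y_bar - beta_1_hat * x_bar
beta_0_hat = y_bar - beta_1_hat * x_bar

print(f"y_hat = {beta_0_hat:.2f} + {beta_1_hat:.3f} * x")
# the fitted line can then be used to predict Y for a new X value
print("Predicted weight at 172 cm:", round(beta_0_hat + beta_1_hat * 172, 1), "kg")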
Hypothesis Testing for Regression Statistics
Hypothesis testing is used within regression analysis primarily to determine whether there is a significant linear relationship between the two variables. Essentially, when we perform this hypothesis test, we are testing the slope of the line between our two datasets – the X values and the Y values being used. The objective is to look at the slope of the line. If the slope is essentially zero, meaning that there is no relationship, we conclude that there is no significant relationship between the independent and the dependent variables. However, if our slope, β1, is significantly different from 0, then, based on the relationship between our independent variable (the Xs) and our dependent variable (the Y), we can conclude that there is a significant relationship between the independent and dependent variables.
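The Python sketch below illustrates this test on hypothetical (x, y) data. scipy.stats.linregress reports the estimated slope and intercept along with a two-sided p-value for the null hypothesis that the slope is zero.

from scipy import stats

x = [10, 20, 30, 40, 50, 60, 70, 80]    # hypothetical independent (X) values
y = [15, 24, 30, 42, 48, 57, 66, 71]    # hypothetical dependent (Y) values

result = stats.linregress(x, y)
print(f"slope = {result.slope:.3f}, intercept = {result.intercept:.2f}, p = {result.pvalue:.5f}")

alpha = 0.05
if result.pvalue < alpha:
    print("Reject H0: there is a significant linear relationship between X and Y.")
else:
    print("Fail to reject H0: no evidence of a linear relationship.")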

Conditions to carry out hypothesis testing using regression analysis 


•Distribution of the Y values, the dependent variables, must be normal.
•Distribution of the Y values must have constant variance.
•Y values must be random and independent.

Steps in performing a hypothesis test


•The first step is to define the business problem. This will help us set up the hypothesis test to determine what the
appropriate level of significance is. Whether it’s an alpha value of 0.05 or 0.01 or another value.
•We are then going to use that information to establish our hypothesis. This is what we’re trying to test.
•Using this information, once we have established our hypothesis, we can determine the test parameters, calculate our
test-statistic, and then interpret our results where we compare our test statistic to our alpha value.

Process of Hypothesis Testing


Hypothesis testing is a scientific process of testing whether or not the hypothesis is plausible.  The following
steps are involved in hypothesis testing:

First step – State the null and alternative hypothesis clearly. The null and alternative hypothesis in hypothesis
testing can be a one tailed or two tailed test.

Second step – Determine the test size. This means that the researcher decides whether a test should be one tailed
or two tailed to get the right critical value and the rejection region.

Third step – Compute the test statistic and the probability value. This step of the hypothesis testing also involves
the construction of the confidence interval depending upon the testing approach.
Fourth step – Involves decision making. This step of hypothesis testing helps the researcher reject or accept the
null hypothesis by making comparisons between the subjective criterion from the second step and the objective
test statistic or the probability value from the third step.

Fifth step – Draw a conclusion about the data and interpret the results obtained from the data.
There are basically three approaches to hypothesis testing. The researcher should note that all three approaches require different subjective criteria and objective statistics, but all three approaches give the same conclusion.

The first approach is the test statistic approach.


•The first step, which is common to all three approaches of hypothesis testing, is to state the null and alternative hypothesis.
•The second step of the test statistic approach is to determine the test size and to obtain the critical value.
•The third step is to compute the test statistic.
•The fourth step is to reject or accept the null hypothesis depending upon the comparison between the tabulated (critical) value and the calculated value. If the tabulated value is greater than the calculated value, then the null hypothesis is accepted. Otherwise it is rejected.
•The last step of this approach of hypothesis testing is to make a substantive interpretation.

The second approach of hypothesis testing is the probability value approach.


•The second step of this approach is to determine the test size.
•The third step is to compute the test statistic and the probability value.
•The fourth step of this approach is to reject the null hypothesis if the probability value is less than the test size (the chosen level of significance).
•The last step of this approach of hypothesis testing is to make a substantive interpretation.

The third approach is the confidence interval approach.


•The second step is to determine the test size or the (1-test size) and the hypothesized value.
•The third step is to construct the confidence interval.
•The fourth step is to reject the null hypothesis if the hypothesized value does not exist in the range of the confidence
interval.
•The last step of this approach of hypothesis testing is to make the substantive interpretation.

The first approach of hypothesis testing is a classical test statistic approach, which computes a test statistic
from the empirical data and then makes a comparison with the critical value.  If the test statistic in this classical
approach is larger than the critical value, then the null hypothesis is rejected. Otherwise, it is accepted.
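
To see that the three approaches agree, here is a minimal Python sketch applying all three decision rules to the same one-sample t-test; the sample data, the hypothesized mean of 50, and the scipy calls are assumptions made purely for illustration.

import numpy as np
from scipy import stats

data = np.array([52.1, 49.8, 53.4, 51.0, 50.7, 52.9, 48.9, 51.8])
mu0, alpha = 50.0, 0.05

t_stat, p_value = stats.ttest_1samp(data, mu0)
t_crit = stats.t.ppf(1 - alpha / 2, df=len(data) - 1)   # tabulated critical value

reject_by_statistic = abs(t_stat) > t_crit       # test statistic approach
reject_by_p_value = p_value < alpha              # probability value approach

half_width = t_crit * stats.sem(data)            # confidence interval approach
ci_low, ci_high = data.mean() - half_width, data.mean() + half_width
reject_by_interval = not (ci_low <= mu0 <= ci_high)

print(reject_by_statistic, reject_by_p_value, reject_by_interval)  # all three agree

All three flags come out the same, reflecting the point above that the approaches use different criteria but reach the same conclusion.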

Regression Analysis to Predict Outcomes


Looking at regression models and hypothesis tests, we have been able to quantify the relationship between two variables. We have also been able to test how well the independent variable predicts the dependent variable's response by applying that linear equation to a dataset.

With the inputs that are key sources of variation identified through our regression models, and with the correlation between these inputs and the output verified, we can now focus on establishing the least squares line and determining where there is variation. Once we can identify those sources of variation, we can focus our Lean and Six Sigma efforts on reducing or eliminating them. We can also use the simple least squares linear regression model to help model or predict future outcomes. So we can look at various values of X based on past performance results and predict future performance. In addition, we can use the method of least squares and the variables in our simple least squares regression formula to determine our Y value, the output we are trying to predict, from our Xs, the input variables. Using this information, we can also calculate the regression coefficients: the slope and the Y-intercept.
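
A minimal Python sketch, assuming some illustrative past performance data, shows how the fitted least squares line can be used to predict a future outcome.

import numpy as np

x = np.array([10, 12, 14, 16, 18, 20])                # past input values (X)
y = np.array([25.0, 28.5, 33.1, 36.8, 41.2, 44.9])    # past output values (Y)

slope, intercept = np.polyfit(x, y, deg=1)            # least squares line y = b0 + b1*x
r = np.corrcoef(x, y)[0, 1]                           # correlation coefficient

x_future = 22                                         # a future input setting
y_predicted = intercept + slope * x_future
print(f"y = {intercept:.2f} + {slope:.2f}x, r = {r:.3f}, predicted y at x = 22: {y_predicted:.2f}")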

Process of Conducting Root Cause Analysis


During the improvement phase of a Six Sigma project, team members use a variety of statistical and non-statistical methods and Lean tools to find underlying issues and ways to address them. Root cause analysis is an exceptional tool that allows us to identify and fix the root cause of a problem, rather than just trying to minimize its effects. Root cause analysis requires using a root cause analysis process and applying a set of root cause analysis tools.

There are three steps in trying to identify and perform root cause analysis. These steps include listing the
possible causes, organizing or grouping these causes, and then prioritizing the list. By using these three key
steps, we can determine the true root cause and actually fix the underlying problem rather than putting a band aid
over the symptoms.

•Listing the possible causes: The first step involves generating a list of possible causes. This is typically the most time-consuming step within the entire root cause analysis because it involves collecting data. In this step we gather the information that's relevant to the problem. Because we're collecting data, we can't rush this step; we need to make sure that we're making data-driven decisions, so we need data to back up whatever we identify as the potential causes. Some root cause analysis tools we would use during this step, as well as the others, include cause and effect diagrams, relational matrices, 5 whys analysis, and fault tree analysis.
•Organizing or Grouping Causes: In the second stage, we focus on organizing and grouping the possible causes listed in
step one. Many of the same tools are also used in this step, including fishbone diagrams and relational matrices. The
primary objective in step two is to identify which causes are linked to each other or influence each other. It could be that
they happen in the same process or by the same people or the same teams. By identifying those common causes or
groupings we can then start to put those into the natural groupings by affinity.
•Prioritizing the List: After we have grouped the causes, we can move into the third step, which is to prioritize the list of causes. We want to drive the list down so we are focusing on those vital few causes, and then apply the root cause analysis tools to them. It's important to note that everything we move forward with needs to be based on data, so we should be able to back up all of the decisions using solid data.

Fishbone Diagrams
The fishbone diagram is one of the most commonly used tools for root cause analysis, and it gets its name from the shape of the diagram itself. It is also commonly referred to as the cause and effect diagram, because we are looking for all of the potential causes related to a particular effect. We may also hear it referred to as the Ishikawa diagram, named after its creator. With the cause and effect, or fishbone, diagram, we are trying to identify all the potential causes for one specific effect. Then we use tools, such as a brainstorming session, to generate all the potential causes that could be leading to that effect.

Steps to create a fishbone diagram –

•The first step in creating the diagram is to develop a problem or effect statement. It's important for the team to develop this effect, or problem, statement together, to make sure everybody fully understands what we're trying to solve.
•The next step is to add the spine that goes out to the left of the diagram. Branching off from this central line we will typically have six or seven main ribs. From there, we add descriptions of the main causes at the end of the branches. Typically these descriptions are developed in the brainstorming session: we group the different ideas for the causes by affinity, and then label the branches based on those headings.
•As the team develops the different causes in that brainstorming session, they start adding them to the fishbone diagram. Again, we brainstorm and write down each idea based on how it's categorized under a main cause.
•In case we need to go deeper into causes, we write down detailed causal factors for each of those main factors, such that each causal factor is attached to the branch of the main factor it's related to.
•Next, we ensure that all the items are included in the diagram. If anything is left out, the analysis of the diagram would not be valid, which would lead to an ineffective resolution of the problem.
•Finally, we analyze the diagram. Once the diagram is complete, the team will have a clear picture of what areas are
contributing to the problems.

Illustration of a fishbone diagram


Suppose we are working on a project to improve the retail bank’s loan processing time. So as a team, we have
identified the effect as consumer loan processing delays, and we’re trying to reduce the delays in the process.

Based on what we are trying to solve, we’ve come up with headers for each of the ribs for a fishbone diagram
which include poor quality infrastructure, varying amounts of loan applied for, customers using paper
applications, and loan processing staff’s inadequate skills.

Now, exploring the rib for inadequate skills of the staff, some of the causes are a lack of experience, inadequate education, inadequate training, and lack of motivation or reward. All of these are linked back to the loan processing staff's inadequate skills, and all of them are potential causes for the effect, which is consumer loan processing delays.

There are some important points to keep in mind when creating cause and effect diagrams.

•Firstly, it is important to reach agreement about the problem statement. Disagreement about the problem statement may
indicate that the problem is more diverse than originally perceived. Without agreement from all participants, disconnect
will exist between the information we want and the information we get.
•We should also fully explore and break down each cause into other detailed causes, until we get to the root cause.
•Finally, the job is to analyze the diagram: decide which of the causes are the most significant contributors to the problem, and then devise our own solutions. The diagram itself does not offer solutions.

Relational Matrices
A relational matrix diagram is a commonly used tool for root cause analysis. It's very useful as a problem-solving tool to help identify the true cause of a problem. The relational matrix diagram shows the relationship between key process input variables and key process output variables in a matrix format. In a sample relational matrix, the column headers are the key process output variables, and the row headers are the key process input variables.

Since it is a matrix, it's very easy to create and use. This tool is used within the Analyze and Improve phases of Six Sigma, since it helps to identify the relationships between the input variables that we are trying to narrow down through various tools, such as root cause analysis or design of experiments, and the output variables they relate to.

Steps in creating a relational matrix


•The first step is adding the input variables to the first column.
•Second, we add the output variables to the first row.
•Third, we assign a weight to each output variable and a weight to the effect each input has on each output.
•Fourth, we multiply the output weight by the effect weight.
•Finally, we enter these values in the table and total the results for each input.

It’s important to understand that the steps here are specific to analyzing the relationships between inputs and outputs in a Six Sigma project, but the same relational matrix could be used for understanding the relationship between any two variables, or any two sets of different things.
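
For illustration only, here is a small Python sketch of the weighting and totalling steps described above; the variable names, the weights, and the 0/1/3/9 effect scale are assumptions rather than values from the text.

output_weights = {"On-time delivery": 5, "Defect rate": 3}      # weight assigned to each output

effects = {                                                     # effect of each input on each output
    "Staff training":   {"On-time delivery": 3, "Defect rate": 9},
    "Machine setup":    {"On-time delivery": 9, "Defect rate": 3},
    "Supplier quality": {"On-time delivery": 1, "Defect rate": 9},
}

totals = {
    inp: sum(output_weights[out] * score for out, score in row.items())
    for inp, row in effects.items()
}
for inp, total in sorted(totals.items(), key=lambda item: -item[1]):
    print(inp, total)   # higher totals point to the more influential inputs
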
5 Whys
5 Whys is a common tool used to drill down to the root cause. This tool is considered very useful since it is a simple method of continuing to ask why until we drill down to the true root cause. The goal is to eventually reach a cause that's actionable. 5 Whys is typically used in conjunction with other tools, such as a fishbone diagram or a Pareto diagram, to help us fully drill down to the root cause.

We start by asking why something happened. For instance, we could ask why a defect occurred; it might have occurred because we had an inconsistent flow of input material. Then we ask why we had an inconsistent flow of input material. The reason could be that we had an incorrect measurement. Again, why did we have an incorrect measurement? The measuring tool gives inconsistent readings. Next, why is the measuring tool giving inconsistent readings? It might be that the measuring tool was calibrated incorrectly. Finally, why was the measuring tool incorrectly calibrated? Maybe we have no measurement system in place.

It is a process of continuing to ask why until we drill down to the true root cause, which is something that's actionable. If we have no measurement system in place, then we need to put one in place. The steps involved are –

•The first step in the 5 whys technique is to select a cause and really ask that initial question. Why does this problem
exist? There are a couple of places we can get the cause from. For instance, we could look back at the cause and effect
diagram. Then based on what was at the head of the Fishbone diagram, we can pull that cause and start asking the 5 whys
to drill down to that root cause.

Another place we can get this information is from the Pareto chart. For instance, if we are assessing the causes
of delays in the receivable cycle, we could use a Pareto chart that reflects data over the past six weeks to identify
reasons for delays in terms of the customers not paying their invoices on time. And so we could use that tallest
bar in the Pareto diagram, and ask why that’s occurring. Then we drill down with the 5 whys based on that highest
occurrence. Once we have that initial why, then we start beginning the why does this occur series of whys. For
instance, if we look at the late invoices, we could start looking at some of the potential causes. The potential
causes could be that customers are losing their invoices or the company is sending invoices late or that the
customers are paying their invoices late.

At the third why, we’re going to start investigating the potential causes. Because at this point we have three
potential causes and we want to make sure that we have data to support any of these before we move forward.
Ideally, we want to narrow the list down to one or two possible causes. So if we think about the customers losing
invoices, we can eliminate this cause because the number of late payments is too large to account for this
problem. That would mean that there would be a high percentage of customers that are losing their invoices. A
second possible cause was that the company was late sending the invoices. And we decide to keep this as a
possible cause because it’s actually supported by data and facts. The third possible cause is that customers are
late paying their invoices. And we’re going to eliminate this possible cause because some of the late payments
are coming from customers who have always paid on time in the past.

Now, we would take this to the next why and we would ask why the company is late sending their invoices. At this
point, we could discover this is happening because there’s a problem with the billing system, there’s user error, or
there are issues with the customer database. As we continue, we ask more of the why questions. And we want to
make sure we’re determining what’s actionable. So if we look at the problem within the billing system, we find that
we have something that is actionable. It turns out that the automated billing system is using legacy software and
it’s been due for an update for many years. Because it’s using old algorithm sets in the software, it’s been
producing invoices that are a few days late rather than sending the invoices immediately once we’ve shipped the
product. So to solve the problem, the team decided to adjust the algorithms as a short term solution. And then as
a long term solution, they’re going to update the entire software. It’s important to note that even though we’re
calling this the 5 whys, we don’t necessarily have to ask why five times. In this example, the team actually
reached an actionable cause after only four rounds of why. On the other hand, we might find that it’s necessary to
continue this process iteratively by asking why six or seven times. The rule of thumb is we stop asking why when
we start repeating the same causes or reasons.

Fault Tree Analysis (FTA)


Fault Tree Analysis or FTA is part of the root cause toolkit used during the Six Sigma analyze and improve phase.
This tool helps us to consider underlying reasons for a specific failure mode or event and also helps us
understand the relationships between them. This is commonly used to search for the causes of an observed or
potential failure. The problems relating to processes, products, service or quality can be addressed and
eliminated. Although Fault Tree Analysis is very useful in manufacturing, that’s not its only application. We can
also use this tool in administrative, service, transaction based and many other contexts.

Value-added and Non-value-added Activities


During the improve phase, there are several Lean concepts, including value added versus non-value added
activities that help to further improve or enhance the process improvement efforts. In this section, we will explore
how to apply the concepts of value added and non-value added activities. This value is always from the
customer’s perspective and what they see is the value of the product or service that we are offering.

Value added refers to features or services that the customer is willing to pay for. For instance, value added could
come from home delivery or ease of use. Alternatively, non-value added activities could be transporting
components to a factory or inspections. Here, it is important to analyze whether non-value added activities are
really necessary. If not, they should be reduced or eliminated from the processes. Consider, however, that not all activities that fail to meet the value-adding criteria should be scrapped. Some non-value added activities are necessary even if they don’t change a product or service to meet customer preferences, or
ensure the task is completed right the first time. Non-value added activities may be necessary to operate a
business or when we need to meet regulatory or accreditation standards that apply to the organization. These
types of activities are known as required non-value added activities. For instance, writing a production report,
paying wages, or even implementing a Lean initiative are necessary for the efficient running of a business, even
though they add no direct value for the customer.

Illustration
•Let us say we have a worker that’s installing bolts to attach a handle to a product, so they’re fulfilling a customer need
for a handle on the product. The customer then would expect to pay the cost of assembling the product because they want
handles. So this procedure would add value.
•Similarly, if we are entering data from invoices and receipts into an accounting system. This adds value because
customers need to be able to balance their books.
•Installing a spark plug in an automobile engine or cooking a hamburger also adds value, because these make physical changes in the item that help it become a finished product.

So these activities add value. Inspections, however, do not add value. They do catch errors that would potentially
disappoint the customers. However, customers are rarely aware of these and they don’t know if the product’s
been inspected a dozen times or not at all. The customers just expect a product to work to their satisfaction and
meet their needs as it was promised to them, so inspections do not add value themselves. On the other hand, consider a programmer who writes computer instructions or a banker who negotiates the interest on a bank loan: they are acting in a way that changes the service, even if the change can't be seen, so their work adds value.

Six Sigma focuses on eliminating or preventing muda, which means waste in Japanese. The goal is to eliminate
and prevent muda to further improve the process. We want to understand where we have waste within the
process and reduce the number of non-value added activities. (A diagram here would show the operator's activities and operations at each step of the process, distinguishing value-added activity, non-value-added activity, and muda, especially activity that is not required at each step.)

Process of Eliminating Waste


In Lean Six Sigma projects, identifying and removing waste is very useful at the Improve stage in the DMAIC cycle. In general, Six Sigma defines seven wastes to reduce or eliminate.


Overproduction: The first waste is overproduction, which is when we are producing more product than is needed.

Extra Processing: The next waste is extra processing, where we’re doing extra steps in a process or to the product that
the customer’s not willing to pay for.

Motion: The next waste is motion, specifically excessive motion, in terms of the people or the operators who are
involved within the process, whether this is reaching over too far or having to walk too far to get what they need.

Waiting: The next waste is waiting, and this could be people waiting on equipment, equipment waiting on people, or
waiting for a step in the process. For example, waiting for someone to sign a form to get an approval.

Transportation: The next waste is transportation, such as moving a product back and forth multiple times.

Inventory: The sixth waste is inventory, which is holding on to too much product in the inventory that might be damaged
or for a customer who may change their mind and not want it anymore.

Defects: Then the last waste is defects. Defects are any result that does not meet the customer’s specification. Defects
usually result in rework or scrap.
In addition to the seven wastes, there’s an eighth waste considered by many, and that is the non-utilized or
underutilized skills and talents of employees. This eighth waste is added because people need to understand the
importance of the employees and how much they add to the process. By not using their time, their talents, and
their skills, we’re losing value, so this is another form of waste. All eight of these types of waste form the acronym
DOWNTIME, defects, overproduction, waiting, non-utilized talent, transportation, inventory, motion, and extra
processing.

There are some strategies we can use for reducing the impact of waste.

•We can eliminate overproduction and inventory waste by changing from a push system to a pull system. This would help
to ensure that we purchase or produce only what the customer or the next process requires.
•Another way is to implement poka-yoke, and poka-yoke devices are mistake-proofing mechanisms that can be used to
reduce the waste of overproduction and excessive inventory.
•Another strategy could be to reduce the batch sizes, or eliminate batch production by going to single piece flow
production. This helps to keep either the batch sizes down or the batch sizes at one, so we can level out the sales demand.
•One way to help this is to reduce the setup and changeover times, which helps to drive down to the smaller batch sizes that are more economical.
•Another strategy is to focus on improving communication. When we ensure communication flows between facilities and suppliers, we are better able to put together a pull system, in which each component or item is supplied only when it's needed.

5S Methodology
During the improve phase of Six Sigma, there are four key tools used to eliminate waste, including 5S, poka-yoke,
standard work, and kanban or pull. In this section, we’ll explore 5S. The 5Ss are based on five Japanese words,
and in English they translate to sort, straighten, shine, standardize, and sustain.

They represent a cycle, an iterative process of continuous improvement. It is crucial in terms of quality
improvement, because we want to have a workplace that’s clean and organized and has structure. It’s much more
difficult to make improvements within the process if it’s disorganized, cluttered, or neglected. This is where 5S
provides an excellent methodology for making sure that we only have what we need when it’s needed. And it’s in
the order that we need it, when we need it. So we start out the process with sort and straighten. With sort, we’re
determining what’s needed versus what’s not needed such that we are discarding or removing what’s not needed
when we’re operating that step of the process. Once we know what we need and what we don’t need, we only
have what’s needed within the process. Then we organize or straighten what we actually need. So if we think
about a manufacturing line, we would want to only have the tools that we need for the product that we are
running and anything else that’s not needed in that step, in terms of components or supplies or equipment, would
be removed from the process.

Subsequently what’s left needs to be organized in the order it would be used. The next step in the process is to
scrub and standardize. That means we only have the tools and the equipment and the components that we need
for the process that we’re running. At this point, they would be laid out in the order that we would need them. And
the third S, scrub, means we clean everything so that we have a nice clean environment where we can operate
and we can clearly see what’s needed and what’s not needed and if anything is missing. This could involve wiping
down machines or setting up procedures so that we can make sure that we have a cleaning routine. This would
be the next S, standardize. We could set up the processes where, based on what we’re operating in the line or
what the role is today, we would have the specific things we need to go and clean up at the end of the day or at
the end of the shift. This would provide a standard operating procedure and then the final step in the process is to
sustain. Now when we talk about sustain, we want to make this part of the culture of the organization. So now we
build off getting everything organized and clean, and we have the work instructions and procedures to have a
standard in place. We want to make sure this is part of the daily organization and is part of the ingrained culture
of the organization. We can then set up safeguards to prevent the workplace from slipping back to the old way of doing things.
Process of Implementing 5S
In this section, we will explore the process to implement 5S by working through an example.

STEP 1 – SORT
The first step with 5S is to sort, and with sort we start by red tagging the unneeded items. The key aspect in the
sort phase is to understand what’s needed and what’s not needed. When an item is declared as unneeded for that
process, a red tag is placed on it, which gives it an identifier. Then each of those items is moved outside of that
main process area to a designated area. Typically these items are held for a week just to make sure that they’re
not actually needed. After a week, we have to deal with whatever is left, and those items could be disposed of,
donated, or relocated. And it’s important to note that the goal is not to throw away as much as possible but to
determine what we actually need and what we don’t need, which is why we can hold on to those items for a week
just to make sure there isn’t a special instance where we might need something.

STEP 2 – STRAIGHTEN
The second step in the process is to straighten. With this step of straightening, we are trying to organize things so
that they are arranged in the order that they are going to be used. This helps to maximize comfort and efficiency.
We want tools and equipment to be within reach and the raw materials placed close by so that we can get to
them easier. This helps to eliminate any fumbling or hesitating so that we have exactly what we need when we
need it. Let us examine how this would work in a manufacturing environment. Suppose we have a line supervisor
that’s implementing 5S in a sewing division of a garment manufacturing company.

So the supervisor’s already talked to the employees about sorting in the area and they removed all of the
unnecessary items. The supervisor works with the employees to arrange their workstations and those items
within their workstations in order to maximize comfort and efficiency. For example, the right-handed employees
would have their scissors hooked within reach of their right hands. And the pieces that are awaiting work would
be kept on the left-hand side of the sewing machines. That helps them to pick up their fabric and sew or cut
threads without fumbling or hesitating. So they have everything right where they need it when they need it. In
addition, they would have the pieces awaiting processing stacked on a common set of shelves marked and
labeled as to where they belong. That way the employee who prepares the fabric upstream stacks them whenever
the designated spaces on the shelf are empty.

THIRD STEP – SCRUB OR SHINE


The next step in the process is to scrub, which is making sure that the workspace is clean and neat and that any
machines are clean or free of debris. This could be wiping down machinery or repainting machinery. For daily
maintenance, at the end of the day, we would want to make sure that the floors are clean and the tools are clean
and hung back where they’re supposed to go, and it also involves regular machine maintenance. In the sewing
example, we would want to have procedures set up where employees sweep their sewing area and clean their
machine of debris after working on each piece. That way, at the end of the day, they just need to hang up their
tools, tidy up their area, sweep the floor, and then wipe everything down. We can also set up a schedule where
they are oiling their sewing machine on a weekly basis.

STEP FOUR & FIVE – STANDARDIZE & SUSTAIN


In the standardize and sustain phases, we want to make sure that we continue to communicate the importance of
using 5S. Within these phases, we would establish procedures for all the employees to follow. Now in addition to
just establishing procedures, we also need to make sure that we’re providing time and the necessary tools to
make sure that employees can do this. Finally, we want to make sure that we’re rewarding the behavior so that we
can promote it. The supervisor would also make sure to set aside time at the end of every day for regular cleaning
and tidying up of the areas. And the supervisor would also work with employees to set up protocols for regular
sorting, setting in order, and shining of workstations. To sustain, we would want to set up a procedure where
we’re educating all of the employees through posters and other communication strategies, and we could set up monthly prizes to reward cleanliness.

Poka-yoke
Poka-yoke, the Japanese term for mistake proofing, is one of the key ways to reduce waste in a process. Mistake proofing involves examining the processes to find places where human errors could
occur and finding ways to prevent those errors from occurring. For instance, we have connectors or plugs that fit
in specific shapes. There are other poka-yoke’s in everyday life, such as a microwave oven. It doesn’t run unless
the door is shut. We have spell check on the computer, so it gives us a signal when something is spelled
incorrectly. We could design parts so they can’t be used incorrectly, and we could also have fail-safe cutoff
mechanisms when we have risky procedures.

Four key types of poka-yoke devices



Checklist: The first is a checklist, essentially a list of action items or steps that must be accomplished and checked off as
they are completed or accounted for. So it provides a reminder as we perform specific action items.

Screening Device: The second type of poka-yoke device is a screening device. Screening devices are design features that prevent processes, equipment, or tools from being used incorrectly, so the key aspect of a screening device is that it helps prevent incorrect use. Only the valid options can be selected, based on what's presented to us. And what's nice is that screening devices remove the need to correct mistakes, because we can't even make a mistake in the first place.

Signaling Device: The third type of device is a signaling device. Signaling devices note when something is out of order or not conforming. Signaling doesn't stop the production process, but it gives us a visual signal to let us know that something has happened. For example, we could have a warning light that alerts a machine operator to a problem, or a beeping sound to signal that a step is due to be performed.

Control Device: The fourth type of poka-yoke device is a control device, which is when the devices actually shut down
when the error occurs. It’s the most effective type of poka-yoke device, because the process cannot be completed until the
error or the defect is corrected. An example of this would be a production line that shuts down as soon as a defective part
is detected. Let's go through an example of how this applies, using a manufacturing example with a screening poka-yoke. Suppose a jacket requires ten buttons. Rather than relying on the operator to always count and make sure they've got ten buttons, it's useful to package the buttons ahead of time so that exactly ten parts are included in the standard materials. That way, if any buttons remain after completing the sewing, the operator can tell that a defect has occurred.
Standard Work
Standard work is a useful method that helps reduce waste within the processes. It is a method to document the
best practices between operator and machine, to make sure that we are running in the most efficient and
standard way. Standard work is based on agreed upon procedures and best practices. They’re built from the
bottom up, so everyone is involved in creating the standard work guidelines and they’re also involved in improving
them. In this way, the operators or the people that are directly involved with the operations are involved in the
process. The goal of utilizing standard work is to maximize the performance and also minimize the waste. We
also want to make sure that the standard work is operating based on the planned takt time. So we are making
sure that we’re efficient in terms of what we’re producing based on what the customer demand is. Standard work
is built by the people who are actually performing the work. This helps to make sure that we have the right
documentation of the actual work processes, the action sequence, the quality and safety checks, and any other
important work information. There are several key documents in the standard work package; the first one is the standard worksheet. The purpose of the standard worksheet is to provide high-level documentation of the overall flow of the process, so we can see how efficient the process is and where there might be waste within it.

The three key aspects of the worksheet are,

•The first aspect is the work sequence that allows us to look through each step of the process and how the actual work
flows through each of those machines throughout the production line.
•The second aspect is the cycle time that's provided. Based on what the actual production is capable of, we can document the overall cycle time and then compare it to the takt time to see if we're capable of meeting the customers' demand (a small calculation sketch follows this list).
•The third aspect is that the standard worksheet provides information on the standard inventory and where it is located within the process.
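
As a simple illustration of the cycle time versus takt time comparison mentioned above, here is a Python sketch in which the shift length, customer demand, and step cycle times are all assumed values.

available_time = 7.5 * 60 * 60     # seconds of production time per shift (assumed)
customer_demand = 450              # units required per shift (assumed)

takt_time = available_time / customer_demand     # 60 seconds per unit
cycle_times = [12, 18, 15, 9]                    # documented seconds per process step (assumed)
total_cycle_time = sum(cycle_times)

print(f"Takt time: {takt_time:.0f} s per unit, total cycle time: {total_cycle_time} s")
if total_cycle_time <= takt_time:
    print("The process can keep pace with customer demand.")
else:
    print("The process is slower than takt time; look for waste to remove.")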

Other key aspects that are involved in this standard worksheet are the quality checks, which are denoted by
diamonds and safety checks denoted by pluses. These are noted at each step of the process where they occur.
The second type of documentation in the standard work package is the combination sheet. A sample combination sheet includes information about the process name, a description of each operation, the time elements, and the cumulative operation time (in seconds). A combination sheet is useful because it goes through the sequencing of the operations, provides a description of each operation, and then breaks down the time elements for each of those steps into manual time, automatic or machine time, and walk time to the next step in the process. This
information can be compared to the takt time to see how we can adjust the process to make sure it’s the most
efficient. The next set of documentation that’s useful in standard work is the capacity chart and the standard
operating sheet. Capacity charts are used to chart the capacity of the machines that are involved within the
process.

Each machine number is documented along with its process name. Then any manual or automatic time is documented, so we can look at the relationship between the manual and the machine time, and the time it takes to complete is documented as well. Based on that, we know what the operating time is per day, so we can look at the capacity and use this information to determine where any bottlenecks might be. The standard operating sheet is used to document the interaction of the employee or operator with the equipment and tools, and to communicate the standard timing and procedures. With the standard operating sheet, each activity is documented and includes the standardized work description. We can also use this information to document the process, the order of the operations, and any safety and quality checks. This gives us information on how to better improve the process. It's important that we make sure we have the appropriate training materials and documentation available, because these act as the instructions telling the operators how to carry out the process. So, task and training materials must include detailed policies and procedures. In addition to the policies and procedures, it's useful to have information on how to apply them, as well as any safety and quality checks, so that the operators have an easy reference document. Then, finally, we should have information on cycle times; we can compare these to the takt time to see if we are capable of meeting the customers' expectations, and if not, we can use this information from the training manuals to further drive the process improvement efforts.

Kanban and Pull


Kanban and pull are two Lean concepts that also help to reduce waste as they are very closely related. Pull is how
we pull material through the processes. The concept behind pull is that we only produce product when we have a
demand from the customer. When a customer orders a product, then we start pulling material through the
process. So in this sense, the production is relying on the pull, or the customer demand, rather than on the market
forecast. If we focus on market forecast, we might build ahead and end up building product that the customer
never orders. That’s a form of waste, because we might end up having to scrap that product if there’s never a
customer order for it. Or while we’re waiting for a customer order, we’re holding it in inventory. And we have
associated inventory carrying costs, and potential for the product to be damaged while we’re moving it around.
Pull focuses on the just-in-time philosophy, so there’s no wait time throughout the process, because we are
pulling the material through it. If we think about a chain link, the signal to pull that chain would be from the
customer. As the customer pulls that chain link, then we start pulling products through the processes. By doing
this, we are reducing the stock on hand. We want to make sure that we are holding minimal amounts of inventory.

Aligned with the concept of pull is the concept of kanban. Kanban is the Japanese word for signal. Ideally, we
want to use that customer pull as the signal that we need to produce product. The signal tells the operators when
they need to pull materials into the stream. The advantage of using this methodology is that we are avoiding
overproduction because we’re regulating how much material is moving through the process at any time. In
addition, this step helps with the overall quality because we’re not holding on to products in various queues. We
don’t have the issue of potentially having a defect that might be lost in the inventory somewhere. Since we are
reducing the inventory and we’re reducing the batches, we have better control of the inventory. The disadvantage,
though, is because we’re pulling this just in time, we don’t have as much of a provision for late deliveries of
materials. Kanban and pull blend together. When we have an order from the customer, we’re pulling material back
through the process, through the product flow. And it’s signaling to production that we need to make parts based
on how many the customer ordered. This, in turn, signals a kanban pull back to the supplier that indicates we
need more parts, more supplies. By sending that through, we’re moving the material forward only when we have
that customer need. Let’s explore an example of a kanban pull. Let’s assume we have a workstation that is
communicating with a predecessor workstation. And they’re signaling that they have a need for more material.
They’re doing this as soon as they’ve completed their work. And in that way, kanban is a signal that says, I need to
produce more. And I'm pulling that material from my supplier or my predecessor process. So I'm pulling it back through my process rather than pushing it. There are both opportunities and challenges for kanban pull. The biggest advantage of the kanban pull system is that it helps to reduce inventory, work in process, cycle time,
turnaround time, and machine downtime. It also helps to really increase the visibility of quality issues, because
we don’t have products sitting in inventory at various spots within the process.

The challenge, however, is that for kanban pull to really work, the material needs to flow at a steady rate along a fixed path. If we have large variations in the product or the volume, that mix interrupts the flow and can undermine the system's performance, so it may not work well in processes with large variation within the products. The other obstacle is unexpected variation in market needs. These fluctuations strain the pull system because production starts only when there is demand for a product. So, as a Lean organization, we need to make sure that we can be flexible and really responsive to changes in the
customer demand. Kanban cards and containers are a tool within kanban pull. They can be physical or
electronic.  With a physical kanban card, when we use up the material within a container, we have a kanban card
that typically goes back to some sort of a board. The board collects information, so that we know when we’ve
reached a certain level of kanban cards, we need to produce that product again. Electronic kanban cards capture
the information such as the time, the minutes, the seconds, and how many products are used every hour and
every day. Electronic signals are being sent back to tell the predecessor operation what parts and what mix of
parts need to be produced.

Kanban-Pull Process
In this section, we will explore the Kanban pull process.

•The first step in Kanban pull is calculating safety stock, which is the minimum or safety level of stock required in the
process. Essentially, safety stocks are excess inventory. This is material, whether it’s finished product or goods from the
suppliers that we’re holding on hand to make sure we have enough in place to run the business. We’re doing this because
we might have variations within the demand. We want to make sure that we’re accommodating those variations. But at
the same time, we also want to minimize how much inventory we’re holding on hand. And have just enough to make sure
if we have a peak in the demand that we’re able to handle that. The more unpredictable the demand is, the larger the
safety stock.
•The second step in the Kanban pull implementation is to calculate the lead time to replenish the materials. There are four
key factors that affect the anticipated lead time. The time it takes to place an order. The time it takes to process that order.
The transport time for those materials. And then, the time to receive, unpack, and prepare those materials. Each of those
four factors goes into the overall lead time. And we need to be able to capture that overall lead time to see how much
safety stock we need to hold on hand.
•The third step in the Kanban pull implementation process is to determine the batch size. To determine the batch size,
what we’re looking for is how much product we need to produce in one continuous work process. In other words, this is
how many pieces of that product we need to produce at once before we change over. And this is used to determine the
stock level and the lead time.
•The fourth step in the process is to check and adjust the levels of requested material. So we need to take into account the
current stock, the lead time, the batch size, and then adjust these as needed.

Illustration: Let’s assume we’re working for a manufacturer of small electronics, and the company is introducing
a new media player. The goal is to have this released by the holiday season. As a Six Sigma team, we’re working
at the company’s manufacturing plant to set up the Kanban pull system.
•The first step of the process is to calculate the safety stock level of the materials requested. So the team determines that it
needs a safety stock of one week’s worth of production materials to meet variation outside of expected demand.
•The second step of the process is to calculate the lead time for the replenishment of the materials. The team calculates
that the manufacture and transport time of the components from the supplier to the target workstation is 4 days.
•The third step is to determine the batch size. In order to meet advanced orders, the team determines that a batch size of
200,000 units will satisfy the customer demand.
•Then finally, in the fourth step, we check and adjust the levels. So as a team, we do final calculations to check and adjust
the levels of materials needed in light of those first three steps.

These final levels determine the appropriate placement of the pull cards within the process.
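
For illustration, here is a rough Python sketch of the sizing logic behind these steps; the demand figures, the container size, and the simple min/max safety stock rule are assumptions rather than figures from the media player example.

import math

avg_daily_demand = 10_000      # units per day (assumed)
max_daily_demand = 12_000      # peak daily demand (assumed)
avg_lead_time_days = 4         # replenishment lead time, as in the example
max_lead_time_days = 5         # worst-case lead time (assumed)
container_size = 10_000        # units signalled by one kanban card (assumed)

# Step 1: safety stock sized with a simple min/max rule (one common convention)
safety_stock = (max_daily_demand * max_lead_time_days
                - avg_daily_demand * avg_lead_time_days)

# Steps 2-4: material needed over the lead time plus safety stock,
# expressed as the number of kanban cards/containers to place in the loop
demand_during_lead_time = avg_daily_demand * avg_lead_time_days
kanban_cards = math.ceil((demand_during_lead_time + safety_stock) / container_size)

print(f"Safety stock: {safety_stock} units, kanban cards: {kanban_cards}")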

We can define Design of Experiments (DOE) as a means of making systematic, intentional changes to the input variables in order to measure the impact on the output, or dependent, variables. Within Six Sigma processes we typically have multiple factors identified that could potentially impact the output. The objective of DOE is to reduce the number of input variables and thereby identify which variables really have the most impact on the process. Subsequently, by using the reduced set of variables, we can optimize the process and determine what those key settings should be to get the desired output.

DOE is typically used in the Analyze and Improve phases of the DMAIC methodology.

In the Analyze phase, we identify some key factors. At this stage, there may be multiple factors believed to impact the output of the process. Thereafter the information is taken into the Improve phase, where
we can use the design of experiments.

We can then determine which factors require focus, and how much. This information can then be used to
improve the process. We can then analyze those key input variables to determine which factors are important
and the settings needed. Based on the key inputs and factors analyzed, we can generate solution ideas using the
best combination of input variable settings to optimize a response. Once we have the setting in place, then we
can test, implement, and validate the process for improvements.

Types of Experiments in DOE methodology



Screening Experiment: At this stage of the Six Sigma project, we have multiple input variables. Screening experiments are primarily used to determine which of those input variables, or factors, really affect the output or response. Therefore, the goal of the screening experiment is to reduce the number of factors involved in the process down to those key factors that really make a difference. In general, we would want to get down to a maximum of two or three probable factors that are strongly related to the response. Screening experiments help to make sure that the design of experiments is manageable and more cost effective.

Optimization Experiment: The second type of experiment under the DOE methodology is the optimization experiment. The purpose of the optimization experiment is to optimize the factors that are left to ensure that we reach the desired output. We then adjust the levels of the factors that were identified during the screening experiment.

Robustness Experiment: The third type of experiment under DOE is the robustness experiment, also known as a confirmation experiment. The robustness experiment is mainly used to ensure that the redesigned process is robust; in other words, to make sure that it is stable, that we are able to sustain the gains, and that we get the output we are looking for.
Types of Experimental Designs
There are primarily three types of experimental design under Design of Experiments (DOE):


One-Factor-At-a-Time (OFAT): The first type of experiment design is the OFAT, which stands for one-factor-at-a-time.

Full Factorial: The next type of experimental design is full factorial.

Fractional Factorial: With the fractional factorial, we use a reduced set of runs. This means we essentially take the full factorial design and cut it in half (a sketch of generating a full factorial design matrix follows below).
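
As a sketch of what a full factorial design looks like, the following Python snippet (the factor names and the three-factor setup are assumptions) generates the coded design matrix and confirms it is balanced.

from itertools import product

factors = ["Temperature", "Pressure", "Cure time"]     # assumed factor names
runs = list(product([-1, +1], repeat=len(factors)))    # 2**3 = 8 full factorial runs

for run in runs:
    print(dict(zip(factors, run)))

# Balance check: each level of each factor should appear in half of the runs
for index, name in enumerate(factors):
    highs = sum(1 for run in runs if run[index] == +1)
    print(f"{name}: high level in {highs} of {len(runs)} runs")

A fractional factorial would keep only a defined subset of these runs, for example the half in which the product of the coded levels equals +1.
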
Design of Experiments (DOE) techniques enable designers to determine simultaneously the individual and interactive effects of the many factors that could affect the output results in any design. DOE also provides full insight into the interactions between design elements; therefore, it helps turn any standard design into a robust one. Simply put, DOE helps to pinpoint the sensitive parts and sensitive areas in designs that cause problems in yield. Designers are then able to fix these problems and produce robust, higher-yield designs prior to going into production.

DOE offers various benefits which include,

•We are able to test multiple factors at once.
•This method is very efficient and cost-effective, versus testing each individual factor one at a time.
•It allows us to test many factors and evaluate them simultaneously.
•Simultaneously evaluating the factors helps in understanding and quantifying the interactions that might be occurring between various factors and their impact on the response.
•It helps to distinguish the importance of these factors against each other.
•DOE is also very useful since it enables us to analyze a large amount of data and construct prediction models based on this data, using the multiple factors that are being evaluated simultaneously.

Key Elements of Design of Experiment


There are several elements within design of experiments that we should be acquainted with. While working on process improvement in Six Sigma, we have some process elements, the key input variables, or Xs, that feed into the processes, and we try to measure their effect on the response or output variable. Design of experiments comes into play because we are looking at the factors, the input variables (Xs), and the response, the output variable (Y).

Process of conducting DOE


The process of conducting Design of Experiment is –

1.Identify the objective of using experiments. Typically, this may be one of –


•Finding true causes of problems.
•Finding how causes interact.
•Finding the best solution to a problem.
•Testing a solution to ensure it has no undesirable side-effects.
2.Define what is to be measured to show the results of the experiment. It will make the experiment much easier if this is a
single measurement that can be easily and accurately performed. Be clear about other factors, such as when, how and by
whom the measurement will be made.
3.Identify the factors that are to be controlled during the experiment. Consider all things that may affect the measured result,
then select those that are to be varied and those that are to be held steady or otherwise monitored. In this case, any
measurements should be clearly defined. Some of the examples of factors include price, dimensions, temperature, errors,
brands of fertilizer, age ranges. When there are many factors, reduce the list of those that are to be varied by selecting those
known to affect the result and those whose effects are uncertain. It might also be appropriate to perform a series of
experiments, starting with a smaller subset of ‘high-probability’ factors.
4.For each factor selected in step 3, identify the set of levels that the experiment must consider. This will typically be a small
set of possible values such as: 20, 24 and 28; ‘GrowFast’ and ‘EasyGrow’; present and absent. There will be fewer trials to
perform and the subsequent analysis will be easier if very few levels of each factor are selected. Two levels are sufficient for
many cases, as this will show whether changing the factor level changes the experimental result. Three or more levels can be
used to check for a non-linear response. Select the levels to be representative of the range of normal values. Thus they should
be sufficiently separated to be able to identify changes over the normal operating range, but not so spread as to meet any
boundary effects, such as where a liquid boils. Ensure the factors can be controlled and measured at these levels. If they
cannot be controlled, then it may be sufficient to measure them and sort them into ranges.

5.  Select the actual trials to take place. There are a number of possible ways of doing this, depending on the type of experiment being performed. Some simple methods are described in the section on practical variations, below. The decision on how many trials to perform may include economic factors, such as time, effort and cost. For example, crash-testing of vehicles is expensive and time consuming and is impractical to do too often. When trials are selected, check that they are balanced, with the different levels of each factor occurring the same number of times. Also check for orthogonality, with each pair of factors having each combination of levels occurring the same number of times.

6.  Perform the trials as planned. This may be a simple set of experiments or may require significant organization
and funding. In any case, be careful in controlling factors at the selected levels and in measuring and recording
results. Consecutive trials should not have any chance of affecting one another; if this could happen, perform
trials in random order. Results may be recorded in a simple table, such as illustrated, which shows one trial per
row, with levels and results on the same row. This will help analysis, as results may be visually correlated with the
selected factor levels.

7.  Analyze the results. A simple approach is to average and plot results for each factor, level and combination, as
illustrated. More complex methods are given in the references. Where there are more than two levels, this will
result in lines through more than two points. If the lines are not straight, then this indicates a complex effect.

8.  Act on the results. This will depend on the objectives from step 1, and thus may be one of

•Eliminating what is now a known cause.
•Selecting the most effective solution to a problem.
•Acting to remove undesirable side-effects.

Illustration
Create a design matrix for the factors being investigated. The design matrix will show all possible combinations
of high and low levels for each input factor. These high and low levels can be generically coded as +1 and -1. For
example, a 2 factor experiment will require 4 experimental runs.
Note: The required number of experimental runs can be calculated using the formula 2^n, where n is the number of factors.
For each input, determine the extreme but realistic high and low levels we wish to investigate. In some cases the
extreme levels may be beyond what is currently in use. The extreme levels selected should be realistic, not
absurd. Enter the factors and levels for the experiment into the design matrix. Perform each experiment and record the results. For example, consider a two-factor experiment testing the effect of temperature and pressure on the strength of a glue bond.

Now, calculate the effect of a factor by averaging the data collected at the low level and subtracting it from the
average of the data collected at the high level. For example:

Effect of Temperature on strength = (51 + 57)/2 – (21 + 42)/2 = 22.5 lbs

Effect of Pressure on strength = (42 + 57)/2 – (21 + 51)/2 = 13.5 lbs

The interaction between two factors can be calculated in the same fashion. First, the design matrix must be
amended to show the high and low levels of the interaction. The levels are calculated by multiplying the coded
levels for the input factors acting in the interaction. For example:

Calculate the effect of the interaction as before. Effect of the interaction on strength: (21 + 57)/2 – (42 + 51)/2 =
-7.5 lbs
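
The effect calculations above can be reproduced with a short script. The following is a minimal sketch in Python, assuming the four trial results map onto the coded factor settings implied by the averages quoted above (the -1/+1 coding and the run ordering are illustrative):

temperature = [-1, +1, -1, +1]   # coded low/high levels for each of the 4 runs
pressure    = [-1, -1, +1, +1]
strength    = [21, 51, 42, 57]   # observed strengths in lbs

def effect(levels, response):
    # Average response at the high level minus average response at the low level
    high = [y for x, y in zip(levels, response) if x == +1]
    low = [y for x, y in zip(levels, response) if x == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# The interaction column is the product of the coded factor levels
interaction = [t * p for t, p in zip(temperature, pressure)]

print(effect(temperature, strength))   # 22.5
print(effect(pressure, strength))      # 13.5
print(effect(interaction, strength))   # -7.5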

The experimental data can be plotted in a 3D Bar Chart.


The effect of each factor can be plotted in a Pareto Chart.

The negative effect of the interaction is most easily seen when the pressure is set to 50 psi and Temperature is
set to 100 degrees. Keeping the temperature at 200 degrees will avoid the negative effect of the interaction and
help ensure a strong glue bond.

Experimental Error
One of the key challenges in the Design of Experiments (DOE) is experimental error. Experimental errors are observed
variations in experimental results. There are mainly two types of experimental error – systematic errors and random errors.

Systematic Errors: Systematic errors generally come from the measurement instruments themselves. They could be caused by something wrong with the measurement instrument itself, by how the data is being handled, or by the instrument being used incorrectly by the experimenter. What is important to note here is that a systematic error occurs when the same error is evident every time an experiment is run. Because these errors repeat consistently, the design of experiments has that error inherently built into it.

Random Errors: On the other hand, random errors are unknown and are caused by unpredictable changes in the experiment. These can also come from measurement instruments, but they typically arise from environmental conditions. For example, if we are using an electronic instrument, there could be electronic noise within the instrument, or there could be heating and cooling changes within a thermal chamber or a heat treat oven.

Bias – Bias is defined as a type of systematic error. Bias is the difference between a known value, such as a calibrated standard, and the actual observation. Bias should be a consistent difference over time, and it is inversely proportional to accuracy. For example, consider a measuring instrument that is measuring a part: the instrument reports the weight of the part as 1.446 grams, but we know from the true standard that it is 1.346 grams, so the bias is 0.1 grams. Another example of bias is scientists unintentionally selecting promising participants for an experimental group, which causes the test results to be skewed. Or a majority of patients with a poor prognosis may intentionally sign up for a clinical trial, making the drug being tested appear less effective than it actually is.

How to control random error?


We can control random error by taking repeated measurements and averaging them, as sketched below. There are several other causes of error that are uncontrollable, including noise factors and lurking variables. Consider how opening a pizza oven affects cooking times: every time we open the door, the oven cools down slightly. This is an example of a noise factor. An example of a lurking variable might be an incorrectly calibrated scale. Uncontrollable causes of error impact the experiment by inducing variation and skewing the results, so we need to make sure we understand what those causes of error could be as we set up the design of experiments.
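
As a rough illustration of why averaging helps, the sketch below simulates repeated readings of a part whose true weight and bias follow the example above; the random error spread and the number of readings are assumed values. Averaging shrinks the random error (roughly by the square root of the number of readings) but leaves the systematic bias untouched:

import random
import statistics

true_value = 1.346        # grams, the calibrated standard from the bias example
bias = 0.1                # systematic error; averaging does NOT remove this
random_error_sd = 0.02    # assumed spread of the random error

def measure():
    # One reading = true value + systematic bias + random noise
    return true_value + bias + random.gauss(0, random_error_sd)

single_reading = measure()
mean_of_30 = statistics.mean(measure() for _ in range(30))

print(round(single_reading, 3))   # noisy
print(round(mean_of_30, 3))       # close to 1.446: random error averaged out, bias remains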

Balanced Design
When setting up a design of experiments, one of the key principles for an effective experimental design is to ensure a balanced design. A balanced design means that all of the treatment combinations have the same number of observations. We can set this up by defining the levels and the number of factors and then calculating the runs required. For instance, if we had two factors at two levels, we would have four runs, which can be arranged as a balanced design. This means each level shows up the same number of times; in the layout shown, each letter shows up once in each row and once in each column. A quick balance check is sketched below.
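
A simple way to confirm balance is to count how often each level of each factor appears in the planned runs; in a balanced design the counts are all equal. This is a minimal sketch with assumed factor names and levels:

from collections import Counter
from itertools import product

factors = {"Temperature": [100, 200], "Pressure": [50, 100]}   # assumed levels
runs = list(product(*factors.values()))                        # full factorial: 2 x 2 = 4 runs

for index, (name, levels) in enumerate(factors.items()):
    counts = Counter(run[index] for run in runs)
    print(name, dict(counts))   # each level appears twice, so the design is balanced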

Randomization
Design of experiments (DoE) uses randomization as one way to decrease the impact of error from uncontrollable factors. Randomization is a way of organizing the order in which the runs are performed: we take the ordered sequence of factor combinations, consider each possible combination, and then randomize that order before running the experiments. The goal of randomization is to organize the experiment so that the treatment combinations are performed in a random order, which improves the statistical validity of the results. Randomization helps the design of experiments reduce the conditions that create bias. For instance, when we start the design of experiments there could be a learning curve, whether it is with using gauges or with how the information is actually recorded, and that might cause some errors in the measurements. In addition, there might be noise variables impacting the system.
For instance, if we begin the design of experiments at the beginning of a shift, the machinery or equipment might still have to warm up, which could have some impact on the results. Randomizing the experiment balances those noise effects across all of the different runs. There are practical limits, however: in a process where we are casting metal parts, changing a factor such as the pouring temperature of the casting would be very difficult. When we are dealing with hundreds of gallons of melted metal, raising the temperature up and then dropping it back down quickly is not really feasible, so we would want to set the run order up to minimize the impact of such hard-to-change factors. A simple randomization sketch follows.
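
A minimal sketch of randomizing the run order, assuming two factors at two levels (the factor names and level values are illustrative):

import random
from itertools import product

temperature_levels = [100, 200]   # assumed
pressure_levels = [50, 100]       # assumed

runs = list(product(temperature_levels, pressure_levels))   # the 4 treatment combinations
random.shuffle(runs)                                        # randomize the order the trials are performed in

for run_number, (temperature, pressure) in enumerate(runs, start=1):
    print(f"Run {run_number}: temperature={temperature}, pressure={pressure}")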

Blocking
In Design of Experiments (DoE), one of the ways to manage variation that is caused by non-experimental factors is known as blocking. Blocking is a method of setting up blocks for non-experimental factors so that we can take those factors into account and assign experimental units to those blocks. We can organize them in such a way that each block is more uniform than the population as a whole. For instance, we can use blocking while conducting an experiment on the amount of individual TV viewing in a neighborhood: we could consider each household as a block, as opposed to treating all households in the neighborhood, or all of the individual people, as one group. Doing this helps achieve greater precision, because units are compared within a block while the main differences between the blocks are assessed separately. We would then randomize the run order within the blocks to help avoid experimental error. Blocking can also be very useful in manufacturing.
For example, let's say there are different lots of material being used or introduced within a process. Introducing a new lot during a DoE would introduce additional variation, so blocking can be useful for controlling that variation. Alternatively, in the service sector, we may use blocking to manage differences between shifts, so we would want to take that into account. Randomization and blocking are used together to deal with variability by assigning treatments within blocks randomly, as sketched below. With randomization, we can randomize the units in a sample, the order in which samples are tested, and the order of the runs. Blocking helps reduce the influence of non-experimental factors.
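
The sketch below shows one way randomization and blocking can work together: every treatment appears in every block, and the run order is shuffled separately inside each block. The block names and treatments are assumptions for illustration:

import random
from itertools import product

treatments = list(product(["low", "high"], repeat=2))   # 2 factors at 2 levels
blocks = ["Lot A", "Lot B"]                             # non-experimental factor (material lots)

plan = []
for block in blocks:
    within_block = treatments[:]       # every treatment combination appears in every block
    random.shuffle(within_block)       # randomize the run order inside the block
    plan.extend((block, treatment) for treatment in within_block)

for block, treatment in plan:
    print(block, treatment)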

Replication and Repetition


Replication and repetition are two principles for effective, good experimental design.
• Replication means repeating the same complete design, but over longer periods of time. For instance, suppose we have two factors, bill amount and experience of staff, and the response is bill processing time. In this case we are running an experiment with two factors at two levels, which means we need four runs, and replicating means running that full set of runs again later.
• Replicates are very useful when we wish to increase the precision of the output, because we are able to see the change and the variation induced over time. This really helps with being able to detect defects that only show up over time: rather than running the experiment just once, we run the experiment multiple times and see the impact over time. Since we are running the complete experiment again, replication becomes difficult when we have a large number of factors, as it may be too time-consuming and costly; the number of replicates therefore depends heavily on the cost and the time it would take.
Repetition differs from replication in that we repeat the same treatment immediately afterwards. For instance, in an experiment with three factors and two levels, we might repeat the same treatment twice before moving on to the next. There could be more than two factors and levels in an experiment, and we could repeat each treatment more than twice.

Differences between Replication and Repetition

Replication:
•We are taking the measurements during identical but very distinct runs.
•The data is entered down a single column.
•Because time passes between the first run and the second run, we are able to capture that variation.

Repetition:
•We are repeating the same measurements taken during the same run, and then entering the data across rows.
•There is no passage of time between the repeats.
•We won't catch as much variation, because we can't notice the changes in variation over time.

Full and Fractional Factorial Designs


When we begin the Design of Experiments (DoE) it becomes crucial to select the design for the experimentation. In order to
do this effectively, we need to understand the difference between a full and a fractional factorial, and the implications of
choosing either.
• Full factorial designs are mostly used when we have a small number of factors and levels, resulting in a small number of runs, or when there are more factors but cost and time are not an issue.
• Fractional factorial designs are typically used when the number of runs would be large due to a larger number of factors and levels; we are still able to extract valuable information, but with fewer runs. In order to calculate the number of required runs in a full factorial design, we use the formula
Number of required runs in a full factorial design = L^k
where L is the number of levels and k is the number of factors.
For instance, if we have 2 factors and 2 levels, that's 2 to the power of 2 (2^2), or 4 runs. If we have 3 factors and 2 levels, then the number of required runs would be 2 to the power of 3 (2^3), or 8 runs. As we set this up as a full factorial design, it becomes important that we consider the number of runs in the experiment and how this affects the budget and the time constraints.
If we have a large number of factors and levels, then because of time and cost constraints Six Sigma teams might consider conducting a fractional factorial design instead of a full factorial experiment. In fractional factorial designs, the focus is on the main effects and only a limited number of interactions that are of interest. Typically, interactions of more than three factors are not considered significant. So rather than testing every combination, we test a subset of combinations. The number of runs for a fractional factorial is calculated as L to the power of k minus p, i.e.,
Number of runs for a fractional factorial = L^(k-p)
For instance, suppose we have 4 factors at 2 levels. For a full factorial design, that would be 2 to the power of 4 (2^4), or 16 runs. With a fractional factorial, the p value represents the number of factorial reductions. If we're performing 1 factorial reduction, we calculate 2 to the power of (4 - 1), or 2 to the power of 3, which is 8 runs; essentially, we're cutting the number of runs in half from the initial 16. If we take the fraction further so that p is 2, we calculate 2 to the power of (4 - 2), or 2 to the power of 2, which is 4 runs. We are now doing a quarter fractional factorial design and have gone from 16 runs down to 4 runs. This is where, as a company, we can save time and money. A short sketch of these run counts follows.
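
The run-count formulas, together with one common way of constructing a half-fraction by deriving the last factor from a generator (here D = ABC), can be sketched as follows; the factor labels and generator choice are illustrative:

from itertools import product

def full_factorial_runs(levels, factors):
    return levels ** factors                    # L^k

def fractional_factorial_runs(levels, factors, reductions):
    return levels ** (factors - reductions)     # L^(k-p)

print(full_factorial_runs(2, 4))                # 16 runs
print(fractional_factorial_runs(2, 4, 1))       # 8 runs (half fraction)
print(fractional_factorial_runs(2, 4, 2))       # 4 runs (quarter fraction)

# A 2^(4-1) half-fraction built from the generator D = A*B*C (coded -1/+1 levels)
half_fraction = [(a, b, c, a * b * c) for a, b, c in product([-1, 1], repeat=3)]
for run in half_fraction:
    print(run)                                  # 8 runs instead of 16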

Main Effects and Interaction Effects


In design of experiments, we have two types of effects – main effects and interaction effects.
•Main effects are the key factors that have been identified and might impact the output.
•Interaction effects, or interactions, are the interactions between two factors, and the interaction is part of variation. It’s a
variation among the differences between the means for different levels of one factor over different levels of another factor.

For instance, if we are examining a process, we might think that the temperature and time are related to each
other. So we would want to calculate the effect of the interaction of those two factors on the output.

There are several reasons why it’s important to calculate effects.

•When we are calculating effects, what we're really looking for is which of the factors has the biggest impact on a response variable. This is a way of reducing the number of factors by determining which of those factors really has an impact on the output.
•By calculating effects, we are also able to determine the nature of the factor level combination. Because once we’ve
determined that a factor is important, now we need to look at the levels of those factors to see what the appropriate
combination is to make sure we’re getting that desired output.
•If we were to look at this one factor at a time, it might hide the importance or the impact that a factor has on the
response. By using a design of experiments, we want to understand what those effects are between the factors.

Now it is important to understand how did those factors work together to create that impact on the response?

For this we can plot main effects graphically to identify the impact of each factor on the output. So in general, we
are looking at the impact of factor A and factor B and so on. When we plot the main effects, we are looking at the
main effects based on the low setting and the high setting.

•When we change a factor between the high setting and the low setting, a horizontal line means that the change in levels has no effect on the output.
•When we change from the low setting to the high setting and there is a difference in the slope, that means we have some sort of effect on the output.

What we are really looking at is the slope of the line, which defines the main effect: the steeper the slope, the greater the main effect. For instance, if we change from the low setting to the high setting and there is a very large difference in the response, that steep slope indicates a large effect. When we have large effects, these are factors that we can change to impact the output, as the short sketch below shows.
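
Numerically, the "slope" of a main effects plot is just the change in the mean response between the low and high settings of a factor. A minimal sketch, reusing the glue-bond averages computed earlier in this section:

mean_low_temperature, mean_high_temperature = 31.5, 54.0   # (21+42)/2 and (51+57)/2
mean_low_pressure, mean_high_pressure = 36.0, 49.5         # (21+51)/2 and (42+57)/2

temperature_effect = mean_high_temperature - mean_low_temperature   # 22.5: steep slope, large effect
pressure_effect = mean_high_pressure - mean_low_pressure            # 13.5: shallower slope

print("Temperature main effect:", temperature_effect)
print("Pressure main effect:", pressure_effect)
# A horizontal line (an effect near 0) would mean changing that factor's level
# has essentially no impact on the output.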

Cycle Time and Takt Time


In the improve phase of the DMAIC methodology, the focus is on improving overall lead time. Here, it is very crucial to
understand the differences between cycle time and lead time, and how these relate to process improvement.

Lead Time:
•Lead time starts when the request is made by the customer and ends at delivery.
•Lead time is measured from point to point and includes waiting as well as other delays. It is also affected by the actual versus planned speed of the cycle time.
•Lead time is measured in units of time: minutes, hours, and days.

Cycle Time:
•Cycle time starts when work begins on the request and ends when the item is ready for delivery.
•Cycle time is important because it has a major effect on productivity, which feeds into the organization's competitiveness and market share.
•Cycle time and lead time have different units of measurement: cycle time is measured as a rate.

Reducing Cycle Time


We shall now try to explore the concept of reducing cycle time, which is considered as an important element of the Lean
philosophy.
Benefits of Reducing cycle time to an organization
• It really helps to reduce waste within the process.
• Reducing the waste and cycle time enables a significant cost reduction for an organization.
• As we reduce the cycle time, we free up some of the availability and can increase the productivity.
• As we eliminate some of these wastes and reduce the cycle time, we will have an increase in the quality and a decrease in
the product time to market. All of these benefits help to improve the level of customer satisfaction.
• Reducing cycle time translates into quantifiable impact on the entire business.
For instance, if we reduce cycle time by 30 to 70%, which is not uncommon for lean Six Sigma projects, there are several
key benefits that we can quantify.
• It translates into a return on the investment of anywhere from 20 to 105%.
• We can see an increase of revenue of 5 to 20%.
• Reduce the inventories by 20 to 50%.
• Reduce the invisible inventories by 20 to 60%.
• Decrease the delivery lead times by 30 to 70%.
• Reduce the time to market by 20 to 70%.
Besides these direct benefits, the dollar value that comes from avoiding a one-month delay or from accelerating projects could mean millions of dollars for large companies. As the cycle time is shortened, the quality also improves. The main benefit of reducing cycle time is improved customer satisfaction, because we are able to get the product and service delivered on time. We can also experience the impact of cycle time reduction through four common business cycles, so when we focus on reducing cycle time in the Improve phase of Six Sigma projects, it helps to impact the overall business cycles. One of these is time to market: by reducing the time to market, we decrease the time it takes to actually deliver the product to the customer.

Continuous Flow
As we move through the improvement phase within the DMAIC methodology, part of the improvements should
relate to how we could implement lean tools, such as continuous flow. Here, Continuous flow means there is an
absence of interruptions, problems, delays, or backlogs within the process. Some of the features of Continuous
flow are

•Continuous flow is more of a time-based methodology, where the focus is on reducing defects by improving the logistics.
•Continuous flow delivers a flow of products to the customer with minimal delay.
•Another characteristic of continuous flow manufacturing is the use of units made up of the smallest logical level either in
one piece or a small batch at a time.

Right sizing batches is important because batches that are not the right size lead to queues, leading to waiting,
which ultimately leads to bottlenecks and delays, poor space and resource utilization, and to longer cycle times.

Consider a product or service moving through a long production line. So any delays early in the process will delay
later steps in the process. In a complicated process, upstream delays can cascade through the system, causing
multiple time delay problems.

In a continuous flow process, a group of machines or workstations is called a cell. Production steps are arranged in a characteristically tight sequence, which flows through a U-shaped cell arranged for optimum operability and maintainability. Inside a cell, each workstation is organized to achieve continuous flow at the desired takt time. A final key characteristic is that continuous flow uses a team approach throughout the process, which encourages workers to accept and lead the improvement initiative.

It is important to provide training and encourage buy-in from employees, and other process owners. With
continuous flow we ideally have a just in time process, where the material or product flows smoothly through the
process without any interruptions. We can focus on setting up the supplies so that we are taking into account any
peaks or valleys that might occur in the demand. By doing this, we can really level load how the process would
work and take into account any fluctuations or variability in that demand.

Illustration:
We shall now examine continuous flow using an example. A large manufacturing company that produces stock kitchen cabinets is working with a Six Sigma team to adapt its production facilities to continuous flow manufacturing. In the past, production has been plagued by quality problems, inconsistent work methods, large batch production, excessive inventory, and extensive non-value-added work.

Through the Six Sigma initiatives, some changes were made –

•First, the team determined that the old large batch process caused inefficient parts flow, since parts were spread over a large area, requiring workers to take unnecessary steps to retrieve what they needed.
•Additionally, specialized machinery was located away from the main production line, increasing inefficiency. A new workflow was designed that utilizes smaller batches.
•Next, the production flow was divided into a series of U-shaped cells consisting of individual workstations. Each cell was marked with new equipment and inventory locations.
•Material storage units were built into each cell. Team members also talked to employees about the benefits of continuous
flow manufacturing and worked with them to design and implement the new system.
•Cells were reviewed with process owners and the employees were given the opportunity to provide additional input on
the design of each workstation.
•Finally, training in new methods was given to each employee. Handbooks and documentation were provided at each
workstation.

Setup Reduction
In today's competitive market, customer expectations for diversification lead to more frequent setups and changeovers, which, according to Lean, are waste. We shall now explore setup reduction and single-minute exchange of dies, or SMED, a useful process for reducing setup times. Product diversification means that we need to produce smaller batches and change over more frequently. It also means that we typically have shorter production life cycles, because customers expect changes to their products more frequently. Changeover and setup lead to non-value-added activities, so the goal should be to reduce them.

Several benefits of Setup Reduction


•When we are able to reduce the setup time, we can improve quality, because we are flexible enough to make changes to the products to meet the customers' demands.
•We are also able to lower cost, because we are not holding as much inventory on hand; we no longer need to produce as much as possible before changing over just because the changeover takes so long.
•By reducing the setup time, we are able to change over more often, which gives us much more flexibility and lets us better use and manage the resources.
•It also helps us increase capacity, because we can change over more frequently.
•It reduces lead times, since we are able to change over more frequently and produce products in time to ship or deliver them to the customers.
•Finally, by standardizing and reducing the setup times, we're able to reduce our process variability.
One of the most common methods for reducing setup time is single-minute exchange of dies, or SMED. With SMED we are trying to reduce the time it takes from running one product to starting to run another product. Essentially, when we finish running product A, we have the last good unit of production for product A; we then have all of the activity required to change over from running product A to running product B. The changeover time is the time from when we make the last good unit of product A until we make the first good unit of product B. This involves changing out fixtures and tooling, cleaning down the machine, doing any quality checks, making adjustments, and then checking the product again. The overall objective is to get this time down toward zero. One small thing to keep in mind: when we talk about single-minute, it actually means the changeover takes under ten minutes. Another thing to note is that even though this concept came out of changing over dies, which is where the name of the methodology came from, this changeover time applies to any type of changeover that's necessary.

Let us take an example from the healthcare industry: the time from when we finish using the operating room for one patient until it is ready for the next patient is the changeover time. Implementing SMED has several advantages over traditional production line setups, where it is common practice to produce large lots of a particular type of product and then switch over to another product.

•Because we are able to change over more frequently, we can reduce the lot size.
•Because we are making each type of product more frequently, we are not holding as much inventory on hand, so we can reduce the inventory.
•Because we can change over more quickly and have a systematic procedure to do so, we are able to reduce the labor cost and reduce any bottlenecks within the process.
•We can increase the usefulness and capacity of the equipment, because we are reducing the amount of time it is down for
a changeover.
•We standardize the processes as well as reduce the scrap that’s created while we change over the process.
•We are putting in place tools and techniques to make the changeover efficient and effective.

SMED Process
We shall explore the six core steps of the single-minute exchange of dies, or SMED, process.

•The first step of the process is to organize the setup operations. We want to go through and outline each step that's required for a setup, and note whether each of those activities falls into a waste category (something that we can remove), is an internal setup, or is an external setup. An internal setup is something that's done internal to that operation, so the process has to stop for it to be conducted. If we think about a piece of equipment, this would be changing out anything internal to the machinery: the machine has to be stopped while we're inside, for example, taking out a fixture. Anything that's external can be done while the machine is still running.
•In the next step as we have organized the setup operations and know where the waste is within the process, we want to
go through and eliminate that waste.
•Then the third step in the process is to convert the internal to the external. Again, internal operations are things that are
done inside the machine. So the machine has to be stopped. It can’t be running when we perform these activities. Now
when it is external, these are things that still can be done. So we can still be performing activities or if we think about this
with the manufacturing setting, the machinery can still be running. So we want to convert as much from internal to
external as feasible within the turnover task and processes. Because any time we’re performing those internal operations,
we are not producing parts. So now what do we do about those internal processes that we can’t convert to external?
•In step 4 we aim to improve internal setups, i.e., we want to organize the workspace and decrease movement as much as possible so that we can get in, do the necessary steps as quickly as possible, and then move back to running operations. Next, we want to move on to improving the external setups. We want to provide checklists, for example, so that we can make sure we have everything ready: when we shut down the piece of equipment, we have all of the tools, inspections, and the next fixture ready to run the next product. It's also important to keep things organized, using tools such as 5S so that everything is clearly located where we need it for the changeover. We could also use color coding to make sure that we have the right components and parts for the next product, which makes it easy to confirm we have exactly what we need.
•Finally, the last step of the SMED process now that we have all of the operations in order is to develop a standard
operating procedure, or SOP. So we have documentation on how to go about performing the changeover. SOPs and
standard work are related in SMED, and often include visual aids and mistake proofing measures to ensure we get it right
the first time when setting up.
The primary goal of any Six Sigma initiative is to reduce waste in processes and an effective tool to use for that is Kaizen.
Kaizen is a Japanese term, and it roughly translates to small, incremental improvements. We can use Kaizen as part of the
Lean Six Sigma improvement events to drive continuous improvements, using small, incremental steps based on specific
projects. Typically, Six Sigma projects result in improvements based on what we are implementing. However, there can also
be a little bit of backsliding, or degradation, before we move forward with the next project. By driving and implementing the
Kaizen events into the process, we can have further improvements before we start the next project, and we can start noticing
that further improvement. Kaizen events are particularly useful when we are dealing with capacity constraints, setup
reductions, acute quality issues, or safety issues. Building Kaizen events into the Lean Six Sigma projects helps to improve
the company’s bottom line, and helps to ensure that we are improving upon those original improvement solutions.

Steps in Kaizen
• In the first stage, we state the problem and solution. So we would start by developing the problem statement, determining
what the required goals, objectives, and deliverables are, and communicating these with the stakeholders. A common
Lean Six Sigma tool we will use in this phase is a Kaizen event worksheet.
• The second stage is to gather and train the team. In this stage, we are choosing the project team. We in this stage choose
the project team members based on their individual talents and their collective ability to work cohesively. Team members
receive training for the specific tools and tasks needed during the project. Finally, a schedule is developed to ensure that
the key project milestones are implemented in a timely manner.
• In the third stage of Kaizen, team members collect data on which to base decisions that will allow them to manage the actual Kaizen event. Based on the gathered data, metrics are formed, and the current process is mapped through the use of various Lean and Six Sigma techniques and tools, such as flowcharts, time study sheets, control charts, process capability ratios, and ANOVA.
• In the fourth stage, the team analyzes the data gathered in stage three to identify areas for improvement, formulate ideas,
determine the root causes of problems, and evaluate proposed improvements. Tools typically used in this stage include
fishbone diagrams, 5S, capacity charts, and standard operating sheets.
• In the fifth stage of Kaizen, the team makes improvement recommendations and ranks priorities based on the analysis of
data completed in stage four. Recommendations are prioritized based on estimated financial savings and the amount of
waste eliminated. It’s now that roles, responsibilities, requested resources, metrics, and measurements are revised for each
process improvement. Common tools used include project plans, action plans, flow diagrams, and Gantt charts.
• The sixth and final stage is the action stage. This is where the team communicates the plan and implements and monitors changes. Other stakeholders, such as suppliers or functional employees, are often involved in making these improvements. Regular update meetings are held and progress is measured using metrics. Documentation is completed, a Kaizen storyboard is created, and the process improvements are assessed to determine if they can be migrated to other areas. In this stage, we will typically use a rollout plan, stakeholder training, and the Kaizen storyboard.
It's important to note when we should use Kaizen. It's best used when the problem is clearly understood. If we don't understand the problem, then we need to drill down and understand it before we move forward.
Kaizen is also very beneficial when the team is opposed to change since they have to go through and map the process. Now
through each of those six stages, we are going to have the team involved. Kaizen also provides a quick turnaround.
Therefore, it’s very useful when we require immediate results. It’s also very useful when we go after low hanging fruit. In
other words, Kaizen is useful when the solution requires fairly minimal effort. And because Kaizen provides those immediate
benefits, it’s useful when we have projects where the resources are too limited for long term process improvements. There
are three types of Kaizen events, and each has its own duration. A Kaizen project is typically two to four weeks, a Kaizen
blitz typically occurs over three to five days, and a Kaizen super blitz typically occurs over one to eight hours. Each of these
types of event has a significant difference in duration. Which one we use will really depend on the goals of the project and
what we’re trying to accomplish.
We can use a two-by-two grid to determine the type of Kaizen event that's appropriate for the project, or whether Kaizen is not appropriate and a long-term continuous improvement approach would be best. We consider the effort that's required, in terms of time and cost, on the x-axis and the impact on performance on the y-axis. The impact on performance refers to efficiency, cost, and customer satisfaction. If the effort and the impact are both low, then we have a fair project. If the effort is high and the impact on performance is low, this is something we want to try and avoid. If the effort is high and the impact is high, we would need to consider how to separate the work out into smaller projects. Using this type of grid, we can identify whether Kaizen is appropriate and which type to use. Kaizen events are ideal when the effort is low but the impact on performance is high.
Kaizen Blitz
Kaizen projects are events that focus on improving the value stream. In general, these projects unfold over weeks. But the
lean methodology also uses a technique called the kaizen blitz.

“A kaizen blitz is a planned kaizen event that’s conducted over a very short period of time, typically 3 to 5 days and within
that short time period, we are trying to enhance the process improvement efforts by eliminating non-value-adding activities.”
A kaizen blitz differs from a kaizen event because it's much more focused and has a considerably shorter duration, but we are still looking for a tangible outcome. In contrast, kaizen events are focused on improving the entire value stream, so those projects typically occur over several weeks. Now let us explore the difference between a kaizen event and a kaizen blitz, and how a blitz fits with Lean Six Sigma.

Kaizen Blitz:
•A kaizen blitz is conducted over a very short period of time, typically 3 to 5 days.
•We go after the low-hanging fruit, with very specific goals.

Kaizen Event:
•Kaizen events are focused on improving the entire value stream, so these projects typically occur over several weeks.
•We go after things that are a little higher up than the goals of those smaller kaizen blitzes.
Even though a kaizen blitz focuses on the low-hanging fruit rather than the higher goals of a wide-scale Six Sigma initiative, it still fits very nicely with Lean Six Sigma efforts. We would still have the same need for management and sponsor support from the top of the management chain, and we would also need a clear foundation with the Six Sigma team and workforce engagement.
In a kaizen blitz, we use tools such as 5S, waste reduction, cycle time reduction, SMED, and flow and pull. We also have five phases of planning, measuring and analyzing, brainstorming, implementing, and fine-tuning and confirming, somewhat similar to the DMAIC methodology.

Typical Kaizen Blitz Flow


• On day 1, a project kickoff meeting communicates the purpose of the project to the team, and the following tasks are carried out:
• The problem statement is created and the scope is narrowed and defined.
• Project ground rules are established for roles, responsibilities, workload, resources, and boundaries.
• A work plan is created to document project activities, estimate time and resources, and identify measures of progress.
• The team decides which tools will be used to gather and analyze data.
• On day 2, team members begin to collect data and conduct interviews with workers. They directly observe the current processes, take measurements, and identify items from the seven categories of waste. The following tasks are carried out:
• A process map of the current work area or process is developed.
• The team continues to analyze data as it is gathered and begins developing metrics for workspace use, distance traveled, throughput rates, and lead times for each step in the process.
• Ideas for waste elimination are prepared for presentation the next day.
• On day 3, the team meets to brainstorm improvement ideas for fixing the constraints and eliminating the waste identified on day 2. Each idea is evaluated in light of the collected data and the resources required to implement it. Tools such as affinity diagrams and checklists help the team reach consensus about which improvements to implement. Once the improvements have been determined, the team establishes what will be needed in terms of equipment, tools, templates, documentation, and work standards to carry out the improvement.
• On day 4, the collected data and the approved recommendations for process improvement are incorporated into the project's action plan. Team members are assigned responsibilities and the project work plan is expanded and updated. A timeline is created that sequences the improvement actions. During the day, the team members meet with operators and the process owners to explain and demonstrate process changes and new procedures. The improvements are implemented. Then, throughout the day, the team observes, evaluates, adjusts, re-observes, reevaluates, and readjusts processes in a refinement cycle.
• On day 5, the team refines, approves, and documents process changes. Standard operating procedures are drafted and then tested to confirm that measurable improvement has been accomplished. Final refinements are made and the process is documented as standard work. Changes that require time to implement are incorporated into a future action plan. Finally, the project is documented in a kaizen storyboard for presentation to everyone impacted by the new processes.

Variations of Control charts


Now in the Six Sigma journey, we have gone through the various stages of the Six Sigma DMAIC methodology and have reached the Control phase. By this time, we have analyzed and improved the processes and achieved the desired level, so we want to make sure that we are sustaining those gains. The purpose of the Control phase is to monitor and control the processes over an extended period of time to make sure we can sustain those gains, and one of the main tools for doing that is the control chart. A control chart is based on the normal distribution, where we are looking at the difference from the mean in terms of the standard deviation. Control charts are essentially divided into six zones, each defined by its distance, in standard deviations, from the mean. With the normal distribution, if we are running at the mean plus or minus one standard deviation, we are capturing 68% of our data; at plus or minus two standard deviations, 95% of our data; and at plus or minus three standard deviations, 99.7% of our data.

While using the normal distribution with control charts, the lines at plus or minus two standard deviations are typically considered warning limits that something might be happening within our process. The lines at plus or minus three standard deviations give us the upper control limit and the lower control limit, and anything outside of the upper and lower control limits represents an out-of-control symptom. A small check of these coverage figures is sketched below.
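
The coverage figures quoted above can be checked against the standard normal distribution. A minimal sketch (assumes the scipy library is available):

from scipy.stats import norm

for k in (1, 2, 3):
    coverage = norm.cdf(k) - norm.cdf(-k)
    print(f"mean +/- {k} sigma covers {coverage:.1%} of the data")
# Roughly 68%, 95%, and 99.7%; the +/-2 sigma lines act as warning limits and
# the +/-3 sigma lines as the upper and lower control limits.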

It's important to note that when we are creating control charts, we are capturing information from samples; we rarely have enough time, money, and effort to capture information about the entire population. We make multiple observations within a subgroup (our sample) and use them to make inferences about the total population. Depending on the size of the subgroups, we would use different control charts, and we may also use a specific control chart if the subgroup sizes vary.

Let us take an illustration to demonstrate this further. Suppose a Six Sigma Green Belt is examining the quantity of chipped windshields in an automobile assembly process. If the Green Belt samples the output of the assembly line five times daily, and does this for 30 days, then there are 30 subgroups with five observations per subgroup. One more important thing to note is that we need to select our subgroups so that the chance of detecting differences between the subgroups is maximized, while the chance of finding differences within a subgroup is minimized; this is the idea behind rational subgrouping, which is recapped later. When we look at our control charts, what we are trying to understand is the difference between common cause variation and special cause variation (or assignable cause variation).

Common cause of Variation: With common cause variation, we have a stable and predictable distribution over time in terms of the mean and the standard deviation; common cause variation provides a consistent output over time.

Special cause of Variation: Special cause, or assignable cause, variation is when we have points outside of our control limits or when various trends are showing within our data. When we look at our control charts, we therefore need to understand the causes of variation. The main causes of variation could be:
•Machines: Frequent machine breakdowns, poorly designed equipment, broken or worn tools or general deterioration
of the machines.
•Manpower: Another cause of variation is manpower, so this could be worker fatigue, poor attitudes toward work, lack
of proper supervision, or poor training.
•Methods: We could also have variation in methods and these could be incorrect or unclear methods.
•Measure: With measures we could have poor measuring systems.
•Mother Nature: With mother nature these could be things like temperature and humidity within our process.
•Material: Finally we have materials so these could be wastes within our facility, parts, or batches mixed together.
These are all sources of variation, and within control charts we need to understand whether they represent special cause or common cause variation and how they are impacting our system.

Particularly within the Control phase, we want to understand and monitor to make sure that we are reducing the
impacts of these causes of variation on our output.

Selecting a Control Chart


Now while we are using control charts, it is essential to make sure that we are selecting the appropriate chart,
because this is a critical element in making sure that we’re moving down the right path and we’re collecting the
right data. If the wrong chart is selected then the control limits will not be correct for the data. So there are
several key factors that are important when determining which control charts we should use in certain situations.

For instance, one of the first key factors to understand is whether or not we are using variable data or attribute
data. In addition, we need to know the size of the subgroup and also if the subgroup size varies. Then, finally, we
also need to know what type of distribution we're dealing with. If we're looking at attribute data, is it binomial or Poisson? Remember, with binomial each item is classified as 0 or 1, and with Poisson the count could be 0, 1, 2, 3, or any other integer.
Once we know this basic information, then we need to use this to ensure that we are selecting the correct control
chart. In order to do this, there is a control chart flowchart that helps to make sure that we are selecting the right
type of data. This starts with determining what type of data that we have – variable or continuous data or
attribute or discrete data.

We first consider the variable or continuous data. Now, once we know if we have variable or continuous data, we
need to know then what type of subgroup size we have. If we have a subgroup size of one, then we use the
individual moving range chart. But if our subgroup size is greater than one, then we need to determine if it’s an
Xbar and R or an Xbar and s chart. If our subgroup size is between 2 and 10, we use the Xbar and R chart and if
our subgroup size is greater than 10, then we use the Xbar and s chart.

We go through a similar flow when we look at discrete data: we start with the type of data, the size of the subgroup, and then the distribution. With variable data, the charts are constructed in pairs; with attribute data, we plot a single chart, since we are not looking at ranges or standard deviations but at the actual attribute data. Once we know we have attribute or discrete data, we look to see whether we're using count data or classification data (looking at the number of defectives).

•If we are using count data, then we need to know if we have equal-sized subgroups.
•If we have equal-sized subgroups, then we use a c chart.
•If our subgroup sizes are not constant, they are not equally sized, then we use the u chart.

Now when we are looking at defectives and classification information – meaning each unit is classified as defective or not, even if more than one aspect of the part is defective – we look at whether or not the subgroup sizes are equal. If they are, we use the np chart. If the subgroup sizes are not equal, then we use the p chart. The flowchart logic is sketched below.
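
The selection flowchart described above can be captured as a small helper function. This is a sketch only; the argument names and the way the data is characterized are assumptions:

def select_control_chart(data_type, subgroup_size=1, equal_subgroups=True, counting="defects"):
    if data_type == "variable":                            # continuous data
        if subgroup_size == 1:
            return "Individuals and Moving Range (ImR) chart"
        if 2 <= subgroup_size <= 10:
            return "Xbar and R chart"
        return "Xbar and s chart"                          # subgroup size greater than 10
    # attribute / discrete data
    if counting == "defects":                              # count data (Poisson)
        return "c chart" if equal_subgroups else "u chart"
    return "np chart" if equal_subgroups else "p chart"    # defectives (binomial)

print(select_control_chart("variable", subgroup_size=1))                                  # ImR chart
print(select_control_chart("attribute", counting="defects"))                              # c chart
print(select_control_chart("attribute", counting="defectives", equal_subgroups=False))    # p chart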

Let us take a closer look at some of the control charts for variable data.
If the subgroup size is one, then we are using the individual moving range chart and with the individual moving
range chart, the first chart is the actual individual value that we’re plotting. So, similar to a run chart, we’re plotting
the individual values and then we’re plotting the range between each individual value. With the Xbar and R chart
we’re plotting the means of our subgroups and, again, this is if our subgroup size is between 2 and 10 and then
our range is the range of that individual subgroup.

The Xbar and s chart captures the same information in the mean chart (the mean of the subgroup), but the standard deviation in the s chart is used instead of the range, because the subgroup size is greater than 10. With a subgroup size greater than 10, the standard deviation is a better indicator of the variation within the process. When we look at the control charts for attribute data, we have control charts for count data and control charts for the number of defectives. For count data, the c chart looks at how many defects are within each subgroup, and when the sample size varies, so the subgroups are unequal in size, we use the u chart. For defectives, where the unit is classified as defective even if more than one aspect is wrong with the product, we use the np chart, and when the subgroup sample sizes are unequal with defective data, we use the p chart, which is the proportion chart.

Illustration: Let us assume that we're part of a Six Sigma project team working in an electronics manufacturing facility. We've determined that the temperature of the soldering equipment is essential to the product's quality, but due to cost and technical constraints we can measure the temperature only once every hour, in degrees Celsius. Since we are measuring temperature, we know it is a continuous variable, but we want to know which chart to use in this situation so that we can monitor the performance of the process over that period of time. Since we know our data is continuous, we would go to the variables charts, and because the subgroup size is equal to one, we would select the Individuals and Moving Range, or ImR, chart.
Now let's take a look at another example. Suppose we're monitoring an invoicing process for a process improvement project. We want to assess the stability of the invoicing process by counting the number of errors that occur in it, and we're going to randomly select subgroups of equal size, 200 invoices each, from the invoices generated over 15 weeks. We want to know which control chart we should use in this situation. Since we're going to determine if the process is in control by counting the errors in the invoices, we know we need an attribute chart, because it's discrete data. We're counting the number of errors in the invoices, which tells us we're monitoring defects, and we also know that our subgroup size remains constant at 200. Based on this information, we would select the c chart. Sample control limit calculations for both examples are sketched below.
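
As a rough illustration, the control limits for the two examples can be computed as follows. The data values are made up for illustration; the constants 2.66 and 3.267 are the standard individuals and moving range chart factors, and the c chart limits use c-bar plus or minus three times the square root of c-bar:

# ImR chart for the hourly solder temperatures (values assumed)
temperatures = [232, 235, 231, 238, 236, 234, 233, 237]
moving_ranges = [abs(b - a) for a, b in zip(temperatures, temperatures[1:])]
x_bar = sum(temperatures) / len(temperatures)
mr_bar = sum(moving_ranges) / len(moving_ranges)
ucl_x, lcl_x = x_bar + 2.66 * mr_bar, x_bar - 2.66 * mr_bar
ucl_mr = 3.267 * mr_bar
print(f"Individuals chart limits: ({lcl_x:.1f}, {ucl_x:.1f}); MR chart UCL: {ucl_mr:.1f}")

# c chart for the invoice errors (15 weekly counts, subgroups of 200 invoices, values assumed)
errors = [7, 5, 9, 6, 8, 4, 7, 6, 5, 8, 7, 6, 9, 5, 7]
c_bar = sum(errors) / len(errors)
ucl_c = c_bar + 3 * c_bar ** 0.5
lcl_c = max(0.0, c_bar - 3 * c_bar ** 0.5)
print(f"c chart center: {c_bar:.2f}; limits: ({lcl_c:.2f}, {ucl_c:.2f})")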

Data Trends and Decision Rules
There are several decision rules within control charts that help us interpret what the data is indicating. As a Six Sigma team, it is important to recognize these trends in a control chart because they essentially set off warning bells that indicate an out-of-control process, for instance a point that's outside of the control limits. Let us take a look at some of the different trends that indicate an out-of-control process; there are six commonly used trends.

Freaks: Points that are more than plus or minus three standard deviations away from the center line, i.e., points outside the upper or lower control limit.

Systematic Variation: The second type of trend is systematic variation, where the control chart shows 14 or more data points alternating up and down. With natural variation, the process normally wouldn't have 14 or more data points that alternate up, down, up, down, so this indicates that something is happening within the process and the Six Sigma team needs to investigate further.

Process Drift: With process drift, we have a run of six or more data points that are either steadily increasing or steadily decreasing.

Process Shift: The next type is process shift, with process shift, we have a run of nine or more data points in a row all in
the same side of the mean and, again, if we think back to natural variation within our process and our normal distribution,
we would expect points to alternate on either side of the process mean. With process shift, it’s also commonly known as
process shift up or down.

Stratification and recurring cycles: The next two types of trends are stratification and recurring cycles. With
stratification, that’s when a control chart has patterns of 15 or more data points in a row that are clustered around the
center line. So again, if we think about natural variation within our process and our normal distribution, if we think about
plus or minus one standard deviation within our process, that’s capturing 68% of our data. With 15 or more data points,
we would expect with natural variation within the process, some of those points to be outside of that one sigma limit.

Recurring Cycle: A recurring cycle is when the control chart demonstrates any other visible recurring pattern or cycle. This is where the Six Sigma team needs to go back and understand the patterns or cycles appearing within the data.
Along with these most common signs of an out-of-control process, there are also early warning signs. Based on the previous rules, if we are waiting for 15 data points in a row, that could take some time, and during this time the process might be running out of control. So we also want to look at other trends that serve as early warning signs: for instance, two out of three data points beyond two sigma from the center line on the same side, or four out of five points more than one sigma from the center line on the same side. These early warning signs help us react more quickly than waiting for 15 data points to show a trend. There is a caveat: some trends do not appear to violate the rules here, but they still need to be considered alarm signals. These are things such as shift, drift, repetition, patterns, and stratification. In many Six Sigma situations, control charts don't map to any of these broad rules but still show those visible signs of shift, drift, repetition, patterns, and stratification.
As part of the Six Sigma team, and as Six Sigma professionals, we need to be able to dig deeper to find the possible causes and then identify the appropriate corrective measures. Another key aspect of control charts is that interpreting the individuals moving range chart is slightly different from interpreting the other types of charts: with the individuals moving range chart we have a subgroup size of one, so the moving ranges are not independent and the moving averages are smoothed. This gives slightly different results, and we need to take that into consideration when interpreting moving range charts. A sketch of a few of these decision rules in code follows.
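
A few of these decision rules can be expressed directly as checks on the plotted points. The sketch below implements the freak, process shift, and process drift rules as described above; the function name and the sample data are illustrative:

def out_of_control_signals(points, mean, sigma):
    signals = []
    # Freaks: any point beyond plus or minus 3 sigma from the center line
    if any(abs(p - mean) > 3 * sigma for p in points):
        signals.append("freak: point beyond 3 sigma")
    # Process shift: 9 or more points in a row on the same side of the mean
    run = 0
    for previous, current in zip(points, points[1:]):
        run = run + 1 if (previous - mean) * (current - mean) > 0 else 0
        if run >= 8:                       # 9 consecutive points share a side
            signals.append("process shift: 9 points on one side of the mean")
            break
    # Process drift: 6 or more points steadily increasing or steadily decreasing
    for start in range(len(points) - 5):
        window = points[start:start + 6]
        steps = [b - a for a, b in zip(window, window[1:])]
        if all(s > 0 for s in steps) or all(s < 0 for s in steps):
            signals.append("process drift: 6 points steadily trending")
            break
    return signals

print(out_of_control_signals([5.0, 5.1, 5.2, 5.3, 5.4, 5.5, 9.9], mean=5.2, sigma=1.0))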

Responding to Trends
Having discussed the common trends to look for in our control charts, it is important to use that information to identify appropriate corrective actions, because each type of trend gives a sign that indicates some specific kind of problem.

FREAKS
Let us consider the first trend that indicates an out-of-control process: freaks. With freaks we have a point that is outside of the control limits, beyond plus or minus three standard deviations. Potential causes of this kind of variation are over-control, using different methods for testing, or using materials from various suppliers that are mixed up or of different quality levels. To take the appropriate corrective action in response to these data points, we could look at preventing operators from over-adjusting (over-tweaking) the process. We also want to ensure that the variability in the machines and the operators is reduced to a minimum, and look at the variation in our incoming materials, because we could be getting different levels of quality from different suppliers.
An important point to note here is that these are general corrective actions that could apply when we discover these trends and patterns, but the right response is very specific to the type of business or production we're actually working in. They need to be tailored to the appropriate context for the environment, and different situations will require different actions. It's impossible to cover all of the possibilities, so the approach is to think about the trends we're discovering and then translate them into the appropriate context for the organization.

DRIFTS
Now let us take a look at how we would respond to drifts within our data. In drifts, we are talking about six or more data
points that are either increasing or decreasing. Some of the potential causes of drifts are – material deterioration, operator
fatigue, worn tools, the operator skill is either improving or deteriorating, or there is a change in the quality of materials.
Some of the ways that we could respond to this change in our output are to maintain the machine that’s wearing down, so
make sure that we have appropriate maintenance. If we can pinpoint it to a specific operator, do a root cause analysis to
determine why that operator might be having problems, and then take the appropriate action to train the operator. If it is
occurring because of a broken tool, then we need to repair the broken tool and look at the tool life.

SHIFT TREND
The next type of trend is our shift trend. This is when we had nine or more data points on one side of the mean. Now this
could occur when we are obtaining more or less material from different sources, when we have new machinery or operators, when there has been a change in our production process, or when there are changes in the way we inspect, or in our materials, methods, or operators. In order to take corrective action when we have a shift that’s
occurring up or down, there are three key aspects that we could address. The first is to ensure that the material supply is
consistent. So working with our suppliers, and then next we could also make sure that our operators use the same methods
and instructions over a reasonable period of time. And then third we could look at the calibration of our devices being used
for the measurements.
SYSTEMATIC VARIATION
The next type of trend is systematic variation. This is when we had 14 data points that were alternating up and down. So
there are several possible causes for this type of systematic variation. The first is over control, so we could be over tweaking
our process. There could be differences in the methods used for testing, differences in material quality, or use of materials that are mixed or come from different quality levels. Primarily, there are four key ways that we could take
corrective actions. We could make sure that the control limits are actually set at the appropriate or correct levels. So we
might need to go back and recalculate our control limits. We need to check our testing procedures. We should also determine
if the inspection should be performed differently or occur more frequently. Finally, we need to make sure that we’re
standardizing the materials that are used.

STRATIFICATION
Now, the next trend is stratification. This is when we have 15 consecutive data points hugging the center line, within one sigma of it. There are several possible causes of stratification. Stratification is most commonly associated with variables control charts, so we’d want to look at that information as well. Potentially our control limits have been incorrectly calculated, or we’ve actually improved
our process to a point that we need to go back and recalculate our control limits, because we have reduced the variation
within our process. In addition, there could be the presence of two or more cause systems and then, finally, if our samples contain measurements from many different lots, the averaging could cause the points to cluster around the mean. Now there are two key ways that we could respond to stratification variation. The first is to ensure the checking
procedure is being followed. So working with our operators to make sure that we’re using a consistent process and then
second, we want to make sure that our employees are taking the measurements properly and they’re following procedures for
the rational subgrouping to make sure they’re getting a representative sample.

RECURRING CYCLES
Next is how we would respond to recurring cycles, which represent other visible patterns that are occurring within our data. This is where the Six Sigma team would really need to dive into understanding what’s happening when we have a consistent pattern that’s repeating itself over time. Some of the possible causes of recurring cycles are physical factors such as temperature and humidity – for example, it might be more humid during the day than it is at night – or operator fatigue, where we might see changes in the morning versus the afternoon.
We could also see differences if we have machines or operators that are being regularly rotated. In addition, scheduled
maintenance and worn tools could affect these recurring cycles. So there are four key ways that we could respond to the
trend of recurring cycles. The first is to adjust the environmental factors where possible, if we can relate the cycle back to temperature or humidity. Next, if we are finding issues with our machine maintenance or worn tools, we need to make sure that we’re regularly maintaining our equipment. If we can associate this with operator fatigue,
or our rotation, then we should look at how we are rotating our operators and how we could change our operators when we
see the signs of fatigue. Then if we’re seeing issues with worn tools, then we should look at the tool life of our equipment to
see how we should better schedule replacing our worn-out tools.

Constructing and Interpreting Control Charts


Once we determine the type of control chart we are going to use based on the control chart flowchart, there are five key steps that are used to create the control chart.
• The first step is to plot the data points
• Then we connect the data points as a time-ordered series
• Third step is to determine and draw the center line
• Fourth step is to determine and draw the upper and lower control limits
• Fifth and final step is to analyze and interpret the chart.
Above are the five key steps for creating the control chart, and each of these steps is explained further in the remainder of this topic. The first two steps involve plotting the data points and connecting them as a time-ordered series.
• The first step is using the information that’s being collected over time. We would plot each data point as we’re collecting
the information.
• In the second step we connect each of those dots so that we can look at our data as a time-ordered series. Also it’s
important that we’re looking at our data as a time-ordered series, because we’re using this information to look for trends
and patterns within our data.
To illustrate this further let us suppose we have a Six Sigma team in a manufacturing company and the team is examining the
diameters of bearings in their operations processes. At this point the team has the machine operator pull a sample subgroup of seven bearings every hour and they do this for eight days. Before the team can begin the control charting process, they
need to first determine what type of control chart is best for this situation. So by looking at the type of data, which is the
diameter of the bearings, the team knows that this is a continuous type of measurement. Also we know the subgroup size –
we are pulling seven bearings every hour for eight days. Based on this information the team determines that they should use
an Xbar and R chart. Since the data is variable, and because we’re using this with a smaller sample size, we have a subgroup
size of seven, which is less than 10 and therefore we use the range chart.
Then using this information, the team charts the average value for each subgroup on the mean chart and then they plot the
range of each subgroup on the R chart. Now as they are doing this, the team plots the data on the X and Y axes, and plots the
data points, and then connects the data points as a time-ordered series as we look at the samples over time.
The next step in the process is to determine the center line. Here the center line represents the mean value for the process and it helps us see how far the process is deviating from the mean. Determining the center line might be slightly different for each type of chart. For instance, the Xbar chart uses within and between subgroup averages, as we will see below. The center line helps show the Six Sigma team how far the variation – the data points – is from the norm and potentially how serious process fluctuations could be.
Now using this information and looking at the Xbar and R chart: for the Xbar chart, we first take the mean of all the observations within each individual subgroup, which gives us the within-subgroup averages, the X-bars. Then we take the average of these X-bars across all subgroups, which gives us the between-subgroup average – the grand average or grand mean, represented by X-double bar – and this is the center line of the Xbar chart.
Similarly, the average of the subgroup ranges gives us our R-bar, which is the center line for our R chart. Now the last two steps of the process are determining the upper and lower control limits. In order to determine our
upper and lower control limits, we need to make sure that we’re selecting the appropriate constant from a table. Now
typically our upper and lower control limits are three sigma above and below the center line.

Interpreting Control Charts


When we interpret our control chart, this helps us to take the appropriate corrective actions. Here we are looking at points
outside of the upper and lower control limits and seeing if there are any trends within our data that might suggest that we have
an out of control process that needs to be investigated.
The fourth step of the process is where the team determines and draws the upper and lower control limits based on the plus or minus three standard deviation calculations. If we look at our Xbar and R charts, we use values from a table to
calculate those. Those values are our A2, D3, and D4 values. These are all constant values that are used in our upper and
lower control limit formulas. Then, based on this information, the team can perform the final step, which is to analyze and
interpret the chart based on the trends within our control charts.
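To make these calculations concrete, here is a minimal Python sketch of the Xbar-R center lines and control limits for subgroups of size seven, as in the bearing example above. The subgroup data are hypothetical; 0.419, 0.076, and 1.924 are the published A2, D3, and D4 constants for a subgroup size of seven.

```python
# Minimal sketch of Xbar-R control limits for subgroups of size 7.
# The subgroup measurements below are hypothetical.

subgroups = [
    [25.1, 25.0, 25.2, 24.9, 25.1, 25.0, 25.2],
    [25.0, 25.3, 25.1, 25.0, 24.8, 25.1, 25.0],
    [25.2, 25.1, 24.9, 25.0, 25.1, 25.2, 25.0],
]  # one list of seven diameters per hourly sample

x_bars = [sum(g) / len(g) for g in subgroups]     # within-subgroup averages
ranges = [max(g) - min(g) for g in subgroups]     # within-subgroup ranges
x_double_bar = sum(x_bars) / len(x_bars)          # grand average (center line, Xbar chart)
r_bar = sum(ranges) / len(ranges)                 # average range (center line, R chart)

A2, D3, D4 = 0.419, 0.076, 1.924                  # table constants for subgroup size 7
ucl_x, lcl_x = x_double_bar + A2 * r_bar, x_double_bar - A2 * r_bar
ucl_r, lcl_r = D4 * r_bar, D3 * r_bar

print(f"Xbar chart: CL={x_double_bar:.3f}, UCL={ucl_x:.3f}, LCL={lcl_x:.3f}")
print(f"R chart:    CL={r_bar:.3f}, UCL={ucl_r:.3f}, LCL={lcl_r:.3f}")
```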

Xbar-R and Xbar-s Control Charts


We shall now consider the control charts for variable or continuous data. Now it is important to note that, when
we talk about control charts for variables data, our information is shown in pairs. Here we are typically looking at the average of our data, which is captured with our Xbar chart, and then the second part of our paired information is the R or s chart, which looks at the range or the spread of our data and how much variation we have within our process. Together we can use these charts to graphically
illustrate when we have the presence of special cause variation. So within this topic, we’re going to look at the
average and range, which is the Xbar and R chart, and the average and standard deviation, which is our Xbar and s chart. These are the two most commonly used variables control charts used throughout the DMAIC
methodology.
As we start looking at the Xbar and R and the Xbar and s chart, it’s important to understand the key differences
between these two charts. We use the range chart when our subgroups have 10 or fewer data points within each subgroup. The standard deviation chart, the s chart, is used when our subgroups are greater than 10. This is because when we have more than 10 data points our standard deviation is a better indicator of the variation or the distribution of our data. Now, when we talk about the R chart, we need to ensure that we have a fixed subgroup size. However, with the s chart the subgroup sizes can vary.
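As a small illustration of this selection rule, here is a Python sketch of a helper that applies the subgroup-size guideline described above. The helper name and the data are assumptions for demonstration; only the threshold of 10 and the fixed-versus-varying distinction come from the discussion.

```python
# Minimal sketch of the rule of thumb for pairing the Xbar chart with an R or s chart.

def spread_chart_for(subgroup_sizes):
    """Suggest the companion chart to pair with the Xbar chart."""
    if len(set(subgroup_sizes)) == 1 and subgroup_sizes[0] <= 10:
        return "R chart (range): fixed subgroup size of 10 or fewer"
    return "s chart (standard deviation): larger or varying subgroup sizes"

print(spread_chart_for([7, 7, 7, 7]))    # -> R chart
print(spread_chart_for([15, 14, 15]))    # -> s chart
```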

Illustration: Let us take two examples to see when we would use each chart. First, let’s take a look at the Xbar
and R chart. So suppose we have a situation where we want to find out the range of variation in the machines that
are making glass cylinders. If our team pulls the subgroup sample of five cylinders at one hour intervals, and then
they measure the cylinders, they could then chart the average value for each subgroup on the Xbar chart. Then
they would use the range of each subgroup with the R chart, because they are pulling a subgroup sample size of
five.
Now let’s take a look at the Xbar and s chart. Suppose we have a team that’s using the Xbar and s chart to check
the control of an improved process. The team checks the inside diameter measurements of tempered glass
beakers. Each subgroup consists of five beakers and the team lead calculates the sample mean and sample standard deviation for each of the 25 samples. So the Six Sigma Green Belt can now use these charts to
determine if the process is out of control or not, and if the limits could be used to monitor the process. Another chart that could be used is the median chart. The median chart differs from the Xbar chart in that we use the median of the data rather than the mean of the data. On the median chart we plot each of the data points pulled within the subgroup, and the main point plotted is the median of those points – the middle point when the data points are arranged from high to low.

It is important to understand when we should use the median chart. We should use the median chart when our
process is not completely normal. The other key aspect of the median chart is that it shows all of the measured
data, not just a single subgroup point. This is important because it shows the spread of the data rather than just one data point; we’re plotting all of the different data points. So it helps better illustrate the spread of the data in addition to the median value. And it is particularly helpful when we have
subgroup ranges that vary greatly, because it helps to show us that dispersion.

Creating p-control chart


We have previously discussed control charts for variables data and we shall now take a look at control charts for
attribute data. Attribute data can be defined as information where the product is simply good or bad; we get to know if our part is good or bad, but we don’t have a continuous variable that we can measure. When we talk about a defect, that’s any characteristic of a product or service that does not conform to our specifications. But when we talk about a defective item, that indicates a unit that contains one or more defects. It’s important to note the difference, because a defect is a single nonconformance, while a defective unit is one that contains one or more defects. When we look at the different types of charts, we have the proportion of defectives, which is our p chart, and the number of defectives, which is our np chart.
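As a rough sketch of how the p chart limits are typically calculated, the Python snippet below uses the standard proportion-defective formulas; the inspection counts and defective counts are hypothetical, and the limits vary with each subgroup size.

```python
# Minimal sketch of p chart (proportion defective) center line and limits.
import math

inspected  = [200, 220, 210, 205, 215]   # units checked per subgroup (hypothetical)
defectives = [ 12,  15,  10,  14,  11]   # defective units found per subgroup

p_bar = sum(defectives) / sum(inspected)           # center line

for n, d in zip(inspected, defectives):
    sigma_p = math.sqrt(p_bar * (1 - p_bar) / n)   # limits vary with subgroup size
    ucl = p_bar + 3 * sigma_p
    lcl = max(0.0, p_bar - 3 * sigma_p)            # negative limits are set to zero
    print(f"n={n}: p={d / n:.3f}, CL={p_bar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```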

Creating U- Control Chart


The final two types of attribute control charts, are the u chart and the c chart. It is important to understand the
difference and when we would use the u chart versus the c chart. For the u chart and the c chart, these are used
when we have counted individual defects in a unit instead of broadly categorizing the unit as being good or bad,
defective or not defective. So this is where we are using information on defects versus defectives. So when we
use the c chart, it plots the defect counts in an unvarying size sample group, so we use the c chart when our
subgroup size remains constant. But the u chart is used when our subgroup sizes are not the same, because it
plots counts per unit in each subgroup. A common question to ask when deciding whether to use a c or u chart over a p or np chart is: can an event happen more than once in this area of opportunity?
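As a rough illustration of the u chart calculation, the Python sketch below applies the usual defects-per-unit formulas, with limits recalculated for each subgroup because the subgroup sizes may vary; the unit and defect counts are hypothetical.

```python
# Minimal sketch of u chart (defects per unit) center line and limits.
import math

units   = [50, 60, 55, 48]     # invoices checked per subgroup (hypothetical)
defects = [22, 31, 25, 20]     # total defects counted in each subgroup

u_bar = sum(defects) / sum(units)                  # center line: defects per unit

for n, c in zip(units, defects):
    ucl = u_bar + 3 * math.sqrt(u_bar / n)         # limits vary with subgroup size
    lcl = max(0.0, u_bar - 3 * math.sqrt(u_bar / n))
    print(f"n={n}: u={c / n:.3f}, CL={u_bar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```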

So let’s take a look at how we would go through and create and use the u chart and let’s do this by looking at an
example. So let’s suppose we’re working with a Six Sigma team at an online shopping company and the company
has launched an invoicing process improvement project.

At the final phase of the project the team wants to monitor if the suggested measures have resulted in a process
that’s in control. Customers are sent invoices for their purchases, and there have been several instances of incorrect invoices being sent to customers in the last six months. These instances coincided with the company introducing its online sales. With this, we have six opportunities for defects per invoice. This is where it is
useful to make sure that we are looking at the c or the u chart, because we have multiple opportunities for an
incorrect invoice. With the incorrect entries these could be made for the type, name, and price of the items or the
items purchased – as well as the customer’s name, their address, and the total billing amount. So those are our
six possible errors on our invoice. Each of the invoice errors at the retailer is counted as one defect, even if a
single defect results in a defective invoice.

Creating C- Control Chart


Let’s do a recap to look at the differences between the u chart and the c chart. It is important because there are
two broad categories where the u charts and the c charts are different. The first is with the number of defects and
then with the number of observations. When we look at the number of defects for our u chart, we’re looking at the
defect per unit in a subgroup; versus a c chart where we’re looking at the number of defects in a subgroup. The
other key difference is with the observations. When we look at a u chart our subgroup sizes can be equal or
unequal, but for our c charts our subgroup sizes must be equal. So let’s take a look at how we would create and develop a c chart, and let’s do this by going back and looking at our invoicing process that the Six Sigma team is
working on at the online shopping company. At this point the team has decided to change how they’re collecting
their data.

The next step in the process is to calculate our upper control limit, which is equal to Cbar plus the result of three
times the square root of Cbar. And in this case we know our Cbar is 27.067 and so our upper control limit is
42.675. For our lower control limit we now are taking Cbar minus the result of three times the square root of Cbar.
And in this case our lower control limit is 11.459. And it is important to note, as we did with the other control charts, that if our calculated lower control limit is negative we set the lower control limit equal to 0 on the chart. The final step in the
process is to analyze and interpret our c chart. The team now has information on the Cbar, the center line, and the
upper and lower control limits.

The formula to calculate the LCL for a c chart is Cbar minus the result of 3 times the square root of Cbar. If the
LCL is negative, it is zero.
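The short Python sketch below reproduces the c chart limits from the example above, using the Cbar value of 27.067; everything else follows the Cbar plus or minus three times the square root of Cbar formulas just described.

```python
# Minimal sketch of c chart limits using the Cbar value from the example above.
import math

c_bar = 27.067
ucl = c_bar + 3 * math.sqrt(c_bar)              # approximately 42.675
lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))    # approximately 11.459; set to 0 if negative

print(f"CL={c_bar:.3f}, UCL={ucl:.3f}, LCL={lcl:.3f}")
```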

Introduction to TPM
There are a number of Lean tools considered very useful within the Control phase of the Six Sigma DMAIC methodology that help us control and maintain our processes. One of the primary tools is Total Productive Maintenance, which is useful within the Six Sigma methodology because it helps to maintain the processes as part of a proactive maintenance program. The overall goals of Total Productive Maintenance, or TPM, are to really maximize the
effectiveness of the processes and the equipment, since we are trying to keep emergency and unscheduled
maintenance to a minimum. And as we do this we also want to make sure that we are boosting our operators and
our employees’ job satisfaction and morale, so that our employees are taking ownership and pride in the
equipment and their roles within the organization. Therefore as we maximize our effectiveness, we are also
removing our deficiencies by working together to improve the maintenance of our operations and as we do this it
helps to eliminate our defects and our downtime.
Six key elements within Total Productive Maintenance (TPM)


Preventive Maintenance: The first is preventive maintenance and this is time based maintenance where we have
maintenances performed on a schedule that’s designed to prevent breakdowns before they can occur.

Predictive Maintenance: The second element is predictive maintenance and this is our condition-based maintenance
where we’re using instruments and sensors, so that we can anticipate when a failure is about to occur or a breakdown is
about to occur, so that we can fix it before the machine fails.

Breakdown Maintenance: The third type is breakdown maintenance and this is repairing the equipment after a
breakdown occurs and so this is more reactive versus our predictive and preventive, which was proactive.

Corrective Maintenance: The fourth type is corrective maintenance and these are ongoing modifications to our
equipment that help to reduce the frequency of breakdowns and also make the repairs easier when our equipment does
breakdown.

Maintenance Prevention: The fifth type is maintenance prevention and this is how we can work as an organization to
design our equipment so that it rarely breaks down and then when it does breakdown or when it fails, it’s very easy to
repair.

Autonomous Maintenance: The sixth element is autonomous maintenance and this is team-based maintenance that’s
done primarily by the plant floor or shop floor operators. When we look at trying to implement maintenance regimes and
improve the reliability of our equipment, organizations face challenges as they try to change the status quo.
In general, within organizations the approach is that we fix things only if they’re broken, so our maintenance is
really reactive rather than proactive. This leads to storing a lot of spare parts with that just-in-case mentality and
we have these large inventories of backup pieces of equipment or spare parts. Because of this the operators may miss signs that could potentially warn them of malfunctions, and so we want to focus on switching the culture of our organization from a reactive approach to a proactive approach. The primary focus
is on getting to more of a zero tolerance type industry when it comes to malfunctions. If we think about
organizations and industry such as the aircraft industry, if equipment malfunctions then we have issues with
safety and regulatory compliance. And so our equipment maintenance is really vital to making sure that we are
meeting our safety and regulatory compliance. There is a big difference between going from that reactive approach to maintenance and adopting the rigors of Total Productive Maintenance. So we want to implement TPM because
it helps to attain a high degree of discipline and certification within our organization.

TPM assists to make sure that we have documentation for every process and every step. And within TPM there
are double and triple verifications for each and every critical process, as well as having audits by regulatory
bodies and third parties to ensure that we have compliance. What we are trying to do here is prevent deterioration of our equipment and reduce the amount of maintenance that we are conducting. So it’s not just about fixing equipment, it’s about being more proactive. TPM really extends beyond simple preventive maintenance; it is a comprehensive management approach that includes the people, processes, systems, and the environment. And so it really
gets to be a coordinated group of activities within Total Productive Maintenance that involves operators sharing
responsibility for the routine equipment inspection, cleaning, maintenance, and minor repairs and it also includes
daily scheduled downtime for maintenance.

The overall aspect of Total Productive Maintenance encompasses total efficiency and the goal of total efficiency
and effectiveness of equipment really focuses on the elimination of failures, defects, rework, waste, and any
losses from equipment related operations. The total also refers to the total package – both downtime prevention and maintainability – and our goal is zero breakdowns and zero defects. Organizations implementing TPM have reported reducing breakdowns to as little as 2% of original levels and achieving up to a 90% reduction in rework, so organizations are seeing significant returns on implementing programs such as TPM. And
the third key aspect of total is total participation and this means involving all employees in the TPM program –
that’s from leaders to operators. So having a TPM program it really doesn’t mean that there’s no professional
maintenance staff. The professional maintenance staff still performs the major maintenance activities and they
coach the operators in their routine and minor activities, but we’re getting everyone in the organization involved in
the maintenance program.

Characteristics of TPM
As per a study conducted by a Japanese plant engineering group, even in clean plants more than half of machine breakdowns are caused by dirt and looseness. Management therefore decided that cleanliness and tightness should be key in any system to reduce breakdowns.

Now, how can maintenance workers stay on top of such a huge task? This is where TPM comes in. TPM focuses on the guiding principle of shared responsibility. The operators are the
key players, but they are supported technically by skilled maintenance workers. So in effect maintenance workers
can be everywhere at once.

Now as we look at TPM there are really three key characteristics that come from that principle of shared
responsibility.

•First, TPM has the involvement of employees at every level and across all departments, since without leadership even the
best maintenance operators and reliability engineers can’t achieve total plant reliability. TPM requires everyone’s
involvement, so we need to make sure that we have got a program with TPM where we are involving employees at every
level and across departments from top management all the way to frontline employees. Then our cross-functional teams
from these departments include production, product, and process development across the board to marketing and
administration. Of course we still have a professional maintenance staff that has responsibilities, but we’re tying in the
employee participation within the enterprise in making sure it’s comprehensive. And this is one of the key aspects of TPM
that really makes it unique.
•Second, TPM integrates autonomous maintenance into daily routine of the operators. So this autonomous maintenance
includes activities that operators perform that have maintenance functions. And these are intended to keep the plant
operating efficiently. Our operators can perform certain equipment maintenance activities that are closely linked to their
daily operations within the equipment that they normally and regularly deal with. So the focus of the operating team is
typically on cleaning, inspecting, lubricating, monitoring, and other activities such as this that are essential daily tasks that
have traditionally been within the domain of maintenance. But within the TPM system, the operators should have a sense
of ownership in protecting and caring for their machines and then the maintenance personnel can help spend more of their
time on value-added activities and technical repairs; rather than reactive activities such as fighting fires.
•The third key characteristic of TPM is that it incorporates company-led, small group activities to help
monitor use of TPM throughout the organization. These small groups consist of employees who are continually
controlling and improving the quality of their work, the products, and the services. TPM uses these small groups
autonomously by tapping into each team member’s creativity and this helps to promote mutual development through
training. So they can facilitate a top down promotion of the company’s TPM activities as well as any bottom-up ideas
from the production floors activities. This helps to achieve zero losses by overlapping the small group activities to
identify causes of failures or potential plant and equipment modifications. So let’s take a look at an example of how manufacturing plants can incorporate shared responsibility into their operations. Let’s consider a manufacturer of plastic
injection and rubber moldings. And they make their products for the home appliance industry and to date they’ve really
neglected their preventive maintenance. And so the company was in that classic maintain it till it fails situation.

Now in order to improve the plant availability, the product quality, and the resource utilization, management
decided to implement TPM. So the first step was with the involvement of employees. The TPM committee
emphasized the most important change of all, which was the new plan that was to be carried out through the
positive participation of all those who are concerned and this was taken across all departments at every level.
And they had a suggestion from a small group for reducing part replacement time on the molding machines and
this was implemented as a joint project by engineers from the parts, maintenance, and production departments.
The organization set it up so that now that the operators carry out mandatory daily and weekly inspections. And
this is done without interrupting the production work because the operators can easily prevent breakdowns,
predict failures, and prolong equipment life if they become more familiar with the machinery they use.
And so it was set up such that the operators would know what they need to do to keep the machines in normal operating condition by lubricating them regularly, monitoring their vital signs, and
recording any abnormalities. The third key aspect that the organization implemented was small group activities.
The company expended considerable effort in improving the plant for ease of machine condition monitoring. And
this included creating small groups within and between departments. And these group activities were overlapped
to help promote a closer working relationship between maintenance and the plant operators. This helped to
encourage the operators to become more equipment conscious through focused training. The small groups were
able to successfully help in this training as the operators learned what constituted normal and abnormal
operations in their equipment, and they learned to listen for potential defects or abnormalities within their
equipment before they occurred.

Goals and Benefits of TPM


•Total Productive Maintenance really focuses on the zero loss concepts and what that means is we are trying to get to the
point where we have zero breakdowns – because we’re being proactive enough that our machines don’t break down and we’re doing regular maintenance as it’s needed – and also zero defects.
•As we reduce our breakdowns and as we reduce the variation from our machines by helping them to operate more
consistently, our output also becomes more consistent. So we reduce the number of defects and this helps us to eliminate
equipment-related failures and waste, and also minimize our emergency maintenance events because our machines aren’t
going down as frequently. And we’re able to perform the predictive maintenance when it’s needed.

Within TPM there are six big losses that contribute negatively to equipment effectiveness –

•The first one is equipment failure. When we talk about equipment failure this results in downtime for repairs, because
breakdowns cause a loss of time and that results in lower productivity. It can also cause quality losses because we might
have defective products associated with this failure. And then the costs associated with the equipment failure include
downtime, the resulting loss production opportunities and yields, as well as labor and spare parts, so it can be fairly
expensive.
•The next big loss is setup and adjustment time. These losses come from equipment changes and result in lost production yield during product changeovers, shift changes, or any other changes in operating conditions.
•The third type is idling and minor stoppages and these can be caused by defective sensors, conveyors that jam, and other
items like this that result in slowdowns and losses. And when we have these frequent production downtimes – they can
range from zero to ten minutes in length – but over time they add up. And particularly because they are short, so our
minor and idling stoppages are typically 10 minutes or less, oftentimes they’re difficult to record manually and as such
they’re typically hidden from efficiency reports; but they can cause substantial equipment downtime and lost production
opportunities that really adds up over time.
•The fourth big loss is reduced speed, which is the difference between the design speed and the actual operating speed, and it results in speed losses. Productivity losses occur when the equipment has to be slowed down to prevent quality defects or minor stoppages. However, in most cases these losses aren’t recorded, because the equipment continues to operate even though it’s doing so at a reduced speed, and that directly impacts our overall equipment effectiveness.
•The fifth loss is process defects, which is when we’ve got scrap and quality defects. Process defects can cause off-spec production and defects due to equipment malfunctions or poor performance, and that reduces our output because we need to rework or scrap product, and that’s waste.
•The sixth big loss is reduced yield, which is typically reflected in wasted raw materials associated with rejects and scrap; these are typically linked to machine start-ups, changeovers, equipment limitations, and poor product design. When we look at the TPM goals, we want to start by eliminating or minimizing these six losses as much as possible, because they directly impact our overall equipment effectiveness and our productivity (a rough sketch of that metric follows this list). So when we look at equipment failures, our goal is to have zero breakdown losses.
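As a rough, hedged sketch of the overall equipment effectiveness (OEE) metric that these six losses feed into, the Python snippet below uses the commonly cited availability x performance x quality form, where losses one and two erode availability, three and four erode performance, and five and six erode quality. All of the figures are hypothetical and chosen only for illustration.

```python
# Rough sketch of overall equipment effectiveness (OEE).
# All figures below are hypothetical.

planned_time   = 480      # minutes in the shift available for production
downtime       = 45       # breakdowns plus setup and adjustment (losses 1-2)
ideal_cycle    = 0.5      # minutes per unit at design speed
units_produced = 800      # actual output (losses 3-4 show up here)
good_units     = 780      # defect-free output (losses 5-6 show up here)

availability = (planned_time - downtime) / planned_time
performance  = (ideal_cycle * units_produced) / (planned_time - downtime)
quality      = good_units / units_produced

oee = availability * performance * quality
print(f"Availability={availability:.2%}, Performance={performance:.2%}, "
      f"Quality={quality:.2%}, OEE={oee:.2%}")
```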

Aims of the Goals for Each Loss Type


 
•When we look at setup and adjustment the goal is to keep the changes under 10 minutes.
•With idling and minor stoppages we’re trying to achieve zero losses.
•With reduced speed the goal is to have zero speed losses.
•For process defects the goal is to have zero quality defects losses and with our reduced yield we’re trying to minimize
our yield losses.
•When we look at losses such as equipment downtime, these can bring a Lean manufacturing operation to a complete standstill, so we want to address equipment failures within our process because they impact the flow of our entire operations.

Several key benefits of TPM


•First we can increase our productivity and efficiency because we’re reducing our downtime.
•We can also reduce our costs and our inventory because we’re not holding additional product just in case equipment
breaks down.
•We are also not holding additional inventory and parts for spare parts in case our equipment goes down and we need to
perform maintenance.
•We can also reduce accidents and pollution, because the times when we’re working on our equipment or when machines break down are times where safety is vital and where accidents could occur; we’re not operating under the standard procedures, so it’s slightly different from what we typically operate under.
•In addition when we have TPM we’re not doing as much fire fighting and so this really helps to increase the employee
morale.
•Also as we are training our operators to better understand their equipment and take care of their equipment; we are
enhancing our operators and our employee skills.

What is visual factory?


One of the most useful tools in the Control phase of the DMAIC methodology is using the visual factory. We can
define the visual factory as a way to make sure that within the organization everything is very clear. While the name visual factory seems to relate more to manufacturing, this tool is increasingly being used in several service industries such as healthcare, hospitality, airlines, and customer service operations.

Then later in this course we will look more closely at the visual factory in the service context where in that context
it’s referred to as the visual workplace. But the idea behind visual factory is making sure that everything is clearly
understandable at a glance and so that we can quickly understand how the operations are functioning. As part of
the visual factory one of the key aspects are visual cues and these are things such as warning signs, process
flowchart, status charts, building maps, indicator lights, color coding, or even simple arrows. These are indeed
very useful in helping to make sure things are simple, succinct, and really effective. At a quick glance we can see
what exactly is going on and what needs to be done. This is the main benefit of visual cues: they’re very efficient because they can convey information very quickly.

Key aspects of Visual Factory


•They reduce a need for repeated verbal messages or costly demonstration because we can quickly see what needs to be
done.
•Another key aspect of visual cues is that we need to make sure that they’re easily understandable and consistent across the board.
•We want to make sure that our problems are visible and then we can quickly see when there is a problem. This is where
the visual cues are very helpful because if a light is flashing, it’s very easy to see that something is wrong, and this helps people keep direct contact with the workplace because it’s very visual. At a quick glance we can look across the factory
or the work environment to see what is going on within the operations. In addition because it’s visual, we need some sort
of basis for when a signal goes off.
•So this helps us clarify the targets for improvement to know when we need a signal.

Within visual factory there are two main components.

•The first are visual displays, when we talk about visual displays these are a way to impart information and data to
employees within that area. These displays help make the area more user-friendly by answering questions quickly. We can
use visual displays to identify equipment, materials, and locations. We can also use these to describe actions and
procedures or provide safety warnings. This is where we can use labels and signs to communicate whatever information is needed at that point.
•The second main component is visual controls, and these actually control or guide actions. Visual controls are used to give management and workers a visible manifestation of what’s happening at that moment, so they help to provide immediate feedback about the workplace’s condition. These could include things such as production boards, schedule boards, tool boards, Jidoka devices, or Kanban cards. They are ways that we can highlight workplace safety, production throughput, material flow, or any other relevant information, giving us a good idea of the current condition of the workplace.

Key Tools of Visual Factory

Jidoka – Making Problems Visible


We shall now take a look at some of the key tools of the visual factory and how we could make our problems
more apparent. Visual factory has three key goals and within each of those goals there are appropriate tools that
can be helpful to make sure that we’re achieving those goals.

•The first goal is to make our problems visible and one of the key tools that we can use for addressing that is Jidoka and
Jidoka is automation with a human touch where we can empower our employees to stop the line.
•The second goal is to make direct contact with the workplace, and tools that we can use towards that goal are visual
information systems, Kanban systems, audio signals, or visual production controls. So these all provide information to
make sure that we understand and we have contact with how our workplace is operating.
•The third goal is to clarify what our targets are for improvement and the two of the key tools that we can use to address
this are a visual performance measurement and then also periodic reporting displays. So these both provide visual
displays, so that we can make sure that our targets are clear and everyone is well informed.

So let’s take a closer look at the first tool, which is Jidoka.

Jidoka is automation with a human touch and the power of Jidoka comes from really empowering our employees
and our workers to make them thinkers within our system. So it’s realizing and understanding that our employees
are a key asset in terms of improving our organization. So we want to make sure that our employees are
empowered to think and then also that they are empowered to stop the production line whenever necessary.
Some of the common Jidoka devices include signaling solutions, panel mount alarms, LED round beacons,
hazardous location signaling, and safety relays.

Illustration: Let us take a look an example at how some of these shutdown devices could be used in a Jidoka
system. So let’s suppose we have a metal manufacturing plant and within the plant, if the temperature drops below acceptable levels in a machine that heats up aluminum, then a red light comes on above the machine.
When the operator or maintenance personnel sees the light, then the machine is stopped, so the employee’s been
empowered to stop the machine and then we stop the machine to actually fix the problem.

Keeping Contact
Another tool considered very useful in the visual factory for helping to keep contact throughout the organization
is use of visual information systems. These are helpful because they help to provide information about the
performance of the process with a quick and easy glance. For example these could be signs or labels or
markings on a floor that show us where the aisles are. These could be tool boards or indicator lights so that we
know exactly how the process is running and it could also be things such as Kanban cards or the information on
the product itself so we can see exactly where things are. And it could also be things such as process flow
diagrams that give us a quick understanding of what needs to happen next within the process.

Illustration: We shall now take up an example where visual controls have been used. We have an air compressor
company that implemented a visual management system. That company implemented tool boards to hold or
mark the place of tools that were required for each workstation. The tool boards visually convey two types of
information, so we know exactly where the tool needs to go based on the shape of it, so we know whether the
tool is there or if the tool is missing. And then when we know that the tool is missing we know exactly what is
missing based on the shape of the tool that’s missing.
Now this saves the operators a considerable amount of time and also helps to eliminate the need to search for
the tool in other boxes or other work areas.

Another type of visual information system is the Kanban system. If we recall, Kanban means signal, and it provides information because it’s a signal for materials and it helps to create a pull system within the environment, because it uses a trigger to tell manufacturing when to begin. Now this could be something that’s electronic or it could be a simple manual board where cards are placed when we need something.

Illustration:  Let us take a look at example of a Kanban system that was implemented in a factory that
manufactures lounge suites. One of the components needed to produce a particular style of armchair is a 10-inch bolt. These bolts were designed specifically for use in assembling a particular type of chair that’s manufactured by this company. The bolts are manufactured in a separate area of the factory and they arrive at
the chair assembly station in boxes of 100. When a box of bolts is empty then the worker assembling the chair at
the station takes the card that was attached to the box of bolts, to the bolt manufacturing area. This would trigger
the production of more bolts, which are sent to the chair assembly station.
Therefore, even though these two areas might be far away from each other and we can’t visually see what’s happening in one process from the other, the Kanban card is a signal that provides information back to each operation on what’s happening. In addition to visual signals we can also have audio signals, and these help
keep workers in contact with what’s going on in the workplace. So even though they’re not visual, these audio
signals still communicate information. Also these audio signals can indicate malfunctioning equipment or they
could sound warnings that will alert people prior to the start of a machine operation or when transport vehicles
are backing up, among other things. So it’s another way of providing information. Another way that we can help employees stay in contact with the workplace is through the use of visual production boards, which help to let the operators, and anyone within the organization who’s walking by, know what’s going on. So this could be posting of
daily production numbers so that we know what’s happening within each station, and how much work in process
there is, how much was finished, and when things were due.

This could be used for maintenance items or quality problems, but they’re very easy and very visual where we can
see if things are going well – how well things are going. And then teams can also use these at the start of a shift
where a department supervisor might use the boards to help set up what the daily planned activities will be and any
potential problems that have happened from the previous shift or what might be coming down the line. Another type of visual control is the visual control board, which is used to help indicate quality issues, machine downtime, cost reductions, trend charts, Total Productive Maintenance, or 5S activities. We could also include production or delivery information and checklist work instructions. So the visual controls can give quite a bit of information. For example, we can look at what week of the year it is and what type of defects we have, and we could use color coding, with average ratings shown in yellow and anything running above our control limits highlighted in red.

So this gives good information to see how the processes are trending and if we have too many defects in one
area and where we need to be focusing our efforts. And then finally let’s take a look at a visual control board. Let’s look at a parts manufacturer – it’s simplifying its production scheduling using visual signals. So instead
of generating paperwork, the supervisors are using signs to really trigger indicators within their production
processes. So they’re using this as more of a scheduling board, so they can look at their processes to see what is
coming down the line to schedule their production. With this chart they are using a simple system with X’s as the
product is consumed. When the filled in chart areas reach a certain height then the operators are authorized to
produce more parts. And so it tells us exactly where that production point is and that means we need to produce
more of that specific part. So by using something as simple as this everyone in the area can quickly visualize the
part consumptions and the production status.

Clarifying Targets
The next key tool in a visual factory is to clarify the targets and one way to do this is to have visual performance
measurement. It is important to make sure that the workers, the operators, and the employees in general have performance information, because if they don’t have access to the performance measurements and the tracking information about meeting targets, then they can’t help us as an organization
reach those goals. So we need to make sure the employees have access to that information and they need to be
able to see the performance measurements as it relates to the goals that we’re trying to achieve, versus where we
are actually at. This is really the third goal of a visual factory is to provide the goals versus actual performance
information and with that there are several tools that can be used to summarize the performance measurement
information. For instance we could use status boards, or indicators, or quality control charts, or check sheets.

Illustration: Let us take a look at an example of how we can keep employees in touch with their target. We
consider a paint factory and within the paint factory managers have setup performance boards and the
performance boards help to display information about where they’re actually running and what their targets are.
Within the organization the floor supervisors are responsible for recording the quantity that has been completed
at the end of each hour and then also the cumulative quantity that’s completed at the end of each day. These
performance boards show the current status of operations by looking at what the work in process is and when it’s
due, so we can help the team meet the goals. So we can use boards like this or of many different types that
include information such as the hourly production, days of inventory, cross training data, or the number of
improvement projects that have been completed. With these boards we can also use color coding, where green means that we’re on schedule, yellow that we’re slightly behind schedule, and red that we’ve actually missed a deadline. Now it is important to make sure with the color coding that we’re
using consistent rules for the color coding. So that as we have employees that are cross-trained and move across
different departments, that they understand what the coding means.

TPM in Service Organizations


We will now consider how Total Productive Maintenance can be used in service organizations. Now, when we talk
about service organizations they are typically characterized by intangible processes, they’re dealing with
perishable outputs. Also when we look at the outputs, commonly they’re driven by the customer themselves, so
we’re not always providing the same consistent or the same type of output, so we’re really dealing more with
heterogeneous output. The final key characteristic is that we have the simultaneous production and consumption
in a service environment. The TPM principles and tools we’ve talked about so far can be applied equally in service industries. But what needs to be done is to connect the characteristics of the service industry with the type of equipment that needs to be maintained.

For example, if we think about the hospitality industry with hotels, fast food chains, and restaurants, there’s still equipment within each of these that needs to be maintained, and if the equipment breaks down or doesn’t work then we still have unsatisfied customers. In addition, think about airlines and customer service operations
such as call centers, help desk, and contact centers; and also functions within our organization such as finance
and human resources, product development, purchasing, and engineering.

Now, these all use equipment that needs to be maintained, and whenever it breaks down it leads to consequences for our customers, whether they’re internal or external. And so it’s important that we think about what our
equipment is within each of these organizations and how we can make sure that we’re taking care of it, to
continue to provide consistent service. Now within the service industry the metric changes slightly: we talk more about overall performance efficiency. This is a commonly used metric within the service industry that measures the effectiveness of the people as well as the processes. What we’re really doing within the service industry is targeting value-added and non-value-added factors, because we want to try to reduce the waste within our process. The Overall Performance Efficiency metric, or OPE, helps us to identify and uncover waste such as waiting waste, changeover waste, retrieving waste, delivering waste, motion waste, and rework waste, amongst others. This helps us to determine the contributing factors to the waste within our systems, and these could be scheduling and other abnormalities, operator performance, or process performance.

Illustration: Let us consider an example of what TPM looks like in the healthcare environment. TPM has considerable application within hospitals because there is equipment everywhere. We have
sterile processing, which has washers and sterilizers. Radiology has x-ray equipment. Surgery has C-arms and
anesthesia equipment and the floor nurses use medication storage machines and those machines at bedsides
that monitor vitals. So all of this equipment is equipment that needs to be maintained and so this ties in directly
with the TPM goals of making sure it’s available. When we look at TPM goals and hospitals – the goals are to
eliminate unplanned machine downtime, increase machine capacity, have fewer errors or failure rates, allow for
minimum inventory, and increase equipment operator safety amongst many others.
What we are really trying to do within health care, then, is shift the maintenance department from focusing only on the use of medical devices to how we maintain them, so that we can reduce our operating costs, have better uptime and availability of our equipment, and really create a better working environment.

So as we focus on TPM in service organizations there are several potential benefits. TPM leads to a better work
area, reduced administrative cost, and a reduction in the number of files, increased productivity in our support
functions since we’re not waiting for equipment to come back up and running, a reduction in office equipment
breakdowns, a reduction in customer complaints. And overall because we’re experiencing fewer breakdowns, and
we’re able to get our jobs done, and we have access to what we need; it really helps to provide a clean and
pleasant work environment overall.

Visual Workplace in Service Organizations


Now, depending on the type of industry we are working in we might either be working in a visual office or a visual
factory. Visual factory is typically used to refer to manufacturing, whereas a visual office is based on the extended notion of the visual factory but encompasses other types of organizations, such as the service industry and the office environment that we might be working in. Also when we look at the visual office it’s similar to the visual
factory, but in this instance this is where employees could see at a glance really what their role is, how the office
is organized, and whether the company or department is achieving its goals.

For instance, we have a small bank that guarantees its customers that they’ll answer their customer phone calls
properly. However, the flow of customer queries at the bank’s call center is fairly unpredictable. Now, the bank
might receive a lot of phone calls in the last hour of business, so employees find it difficult to cope with the volume of calls and might have to work overtime to catch up on other tasks.

Now, in order to handle this problem, the manager decides to monitor the calls more closely and uses a chart that posts hourly updates. This chart helps track the number of calls and gives an idea of the average customer wait time. If the wait times are getting too long, the manager can use this information to transfer more employees to concentrate on clearing out the call log. Having this information in a prominent place within the office helps the employees and the managers to better manage their time. It’s also important to make sure that when we’re putting charts like this out in the workplace we’re using large text and clear graphics, so that all employees can see at a quick glance what’s going on – as in this case, where they were able to see quickly what the average customer wait times were.
Statistical Process Control (SPC)
In the Analyze phase of the Six Sigma DMAIC methodology, we focused on determining the factors or those key inputs that
impact the variation and the mean of our output. Here we have determined what our optimal setting should be based on our
different factors to help control and reduce that variation. However, we are always going to have variation within our
process. We cannot completely eliminate variation. But even though we can’t completely eliminate our variation, we can
certainly take steps to identify and control it. And that’s where Statistical Process Control, or SPC, comes into place.
Statistical Process Control helps us to identify when we have variation within our process, and then also find ways to control
the impact of that variation. Statistical Process Control, though it wasn't called that at the time, was developed in the mid-1920s by Dr. Walter Shewhart along with its associated tools, and one of the most common Statistical Process Control tools is the Control Chart.
When we look at the DMAIC methodology,
• We started out by defining our problem, and then setting as a team what our problem statement should be.
• In our Measure phase, we created the baseline of our data.
• In our Analyze phase, we started to identify what our key factors were.
• In our Improve phase, then we optimize those key factors.
• As we’ve improved our process with our Control phase, we want to make sure that we’re maintaining those gains that
we’ve achieved by looking at ways to control our process over time.
In general, we keep the Control phase in place for three to six months because we want to make sure that we don't have any backsliding within our process and that we're sustaining those gains. Within the Control phase, then, we are going
to determine if and when our process is out of control. It is important to understand when we have an in-control and out of
control process.
When we have our processes in control, our mean and our standard deviation are consistent over time and that means that we
are having a consistent output from our process. On the other hand, when we have an unstable process, this is when we have
points that are outside of our control limits. This indicates that our process is unstable because we are not getting a repeatable
output in terms of our mean and our standard deviation. A stable process would have all of its points consistently within our
control limits and it would form a stable distribution over time in terms of the mean and our standard deviation.
Also it is important to note that as a Six Sigma improvement team, there’s a difference between process capability and
process stability. So when we talk about a stable process, we are talking about the voice of the process such that our control
limits, i.e., upper control limit and our lower control limit, are based on the variation within our process. Also when we talk
about having a capable process, we are comparing our process to our specification limits. In this case with a capable process,
we are comparing our voice of our process to the voice of our customer. The key difference is what we’re comparing our
process to.
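To make this distinction concrete, here is a minimal Python sketch, with measurements and specification limits assumed purely for illustration, that computes control limits from the process data (voice of the process) and capability indices against the specification limits (voice of the customer). It is an illustration only, not a prescribed calculation from the methodology.

```python
# Illustrative sketch: stability vs. capability (assumed example data).
# Control limits come from the process data itself (voice of the process);
# Cp/Cpk compare the process spread to specification limits (voice of the customer).
import statistics

measurements = [10.2, 9.8, 10.1, 10.0, 9.9, 10.3, 10.1, 9.7, 10.0, 10.2]  # assumed data
lsl, usl = 9.0, 11.0  # assumed customer specification limits

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

# Voice of the process: control limits at +/- 3 standard deviations
ucl = mean + 3 * sigma
lcl = mean - 3 * sigma

# Voice of the customer: capability indices against the specification limits
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mean, mean - lsl) / (3 * sigma)

print(f"Control limits (process): LCL={lcl:.2f}, UCL={ucl:.2f}")
print(f"Capability (customer):    Cp={cp:.2f}, Cpk={cpk:.2f}")
```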

Objectives of Statistical Process Control


In Statistical Process Control, there are three key objectives.
• Monitoring the performance of our process
• Identify the variation within our process which includes special and common cause variation that helps to tell us when we
need to take action and when we should not take action.
• When we understand where our variation is coming from we can focus on controlling and improving our process
performance based on how our process is currently operating.

Monitoring the performance of process


When we look at monitoring our process performance, it entails various perspectives.
•The first aspect is, when we determine our process capability, we're looking at how our process operates and how the natural range within our process compares to our specification limits.
• When we look at our control chart, our control chart can be developed into a histogram by looking at the variation within
our process.
• Within our control chart, it’s important to note that this again is looking at the voice of our process. Because our control
limits are based on the process itself and not based on the specification limits.
But we can use this information to monitor our process performance to see if there are specific trends within our data signaling to us that we have an out-of-control condition. Then we can also use information such as our run chart, and we can
compare our run chart to our actual specification limits from our customer.

Identifying Variation within the Process


We also look at the variation within our process by examining our histogram to see how our process is operating. We can build our histogram using a simple check sheet and then compare the histogram to our specification limits. Looking at our process performance from multiple perspectives, the control chart, run chart, and histogram give us information about the centering of our data and the variation within our data. Based on this information we can make management decisions on when we need to change our process or let it keep running. The variation we observe falls into two types – special cause variation and common cause variation.

Common Cause Variation: Our process is always going to have natural variation occurring within it. This natural variation comes from basic changes within our process, such as tool wear or, if we are printing out receipts, gradual changes in the toner.

Special Cause Variation: What we want to focus on is identifying special cause variation, which is when we have out-of-control symptoms. This is where we compare our process to our upper and lower control limits to determine when we have points outside of our control limits, or apply several additional rules, such as looking for upward trends of so many points in a row. Special cause variation is something that can be quickly identified; it's also typically called assignable cause, since when it happens, we can quickly assign a cause to it. Often it is up to the person directly involved with the process to fix it. For instance, if you are running a piece of equipment and a tool breaks, the reading from that tool break might cause an out-of-control data point, but it would not require getting management involved. The person directly involved with the machinery would be able to change the tool.
It is essential as a Six Sigma professional to understand the difference between these two types of variation, since the type of variation determines the type of action we should take. For instance, suppose we adjust a process in response to common cause variation, which is our natural variation. We could actually be inducing more variation into our process because of this over-control. However, if we fail to respond to special cause variation, this could lead to more process variation and more potential scrap for our customer, which is known as under-control. This is why it is important within Statistical Process Control to identify the difference between special cause variation and common cause variation, and then look at the different types of runs in our data and track trends.

Focus on controlling and improving process performance


The primary objective of Statistical Process Control is to control and improve process performance, with the focus on reducing variation. The Taguchi Loss Function was developed to look at that loss; its essence is that any time we have a process that's not running on target, there is a loss. We can then quantify that loss and its impact on the customer or the organization. Taguchi refers to this as a loss to society, and what we are trying to do is get away from the goalpost mentality. The goalpost mentality states that anything within the specification limits is equally good, and that's not true. As a customer, there's a loss associated with a product whenever it does not hit the target, and a product just inside the specification limit is barely different from one just outside it, so the customer would probably be equally dissatisfied. Yet with the goalpost mentality, that small difference between being on either side of the specification limit determines whether the product is judged good or bad. Chances are that slight difference would still leave the customer dissatisfied.
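The Taguchi Loss Function is commonly written as L(y) = k(y − T)², where T is the target and k is a cost constant. The short sketch below, with an assumed k and target, simply illustrates how the loss grows as the output moves away from the target even while still inside the specification limits.

```python
# Minimal sketch of the Taguchi Loss Function L(y) = k * (y - T)^2.
# k and T are assumed values for illustration; k is typically derived from
# the cost of a defect at the specification limit.
def taguchi_loss(y, target, k):
    """Loss incurred when the characteristic y deviates from the target."""
    return k * (y - target) ** 2

target = 10.0   # assumed target value
k = 2.5         # assumed cost constant (currency units per unit^2)

for y in (10.0, 10.2, 10.5, 11.0):   # 11.0 might be the spec limit; there is still a loss
    print(f"y = {y:4.1f}  ->  loss = {taguchi_loss(y, target, k):.2f}")
```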
Within Statistical Process Control, there are several benefits
• We are going through and monitoring the trends within our process, such that we can give immediate attention to our assignable variation, since we are able to detect it much more quickly.
• This also helps to reduce the need for inspection by being able to monitor our processes more effectively.
• This leads to shorter and more consistent cycle times and it increases our predictability and reliability as we are
improving our processes.
• Then finally this provides better monitoring of our improvement processes to ensure that we are sustaining our
improvements so that we can avoid any potential backsliding from the gains that have been achieved.

Considerations for SPC


As we begin to implement Statistical Process Control, several key considerations need to be taken into account.

Buy-in: SPC requires considerable buy-in from every member of the team. This starts with the operators who are using the process, to make sure that they're actually going to use Statistical Process Control, and then also the managers who approve the costs. When we talk about costs for Statistical Process Control, these are the costs involved in initially developing the control limits, whether this is a paper-and-pencil process or an automated process where the data is automatically uploaded. Either of these has costs associated with it, whether it's the time of the operator to plot each data point or the cost of buying equipment so that this is done automatically. Also it's important that
everybody understands the need to investigate the process defects and then also to correct these defects. In general when
we have that recognition of this need from the different levels within our organization, we can move forward quickly and
swiftly with incorporating Statistical Process Control to monitor and improve our processes.

Avoid Over Analysis: Another key consideration is that we need to avoid over-analysis. We want to make sure that we're doing the appropriate level of analysis; if we start over-analyzing, this leads to limited accomplishments. For instance, we don't want to put together a control chart for every single process and we don't need a
control chart for every single aspect of our process, or product, or service. Here, we need to start by focusing on what
those key process aspects are.

Focus on Actual Process Issues: The third key consideration for SPC is that we need to understand that with Statistical
Process Control, we are not focusing on the team member aspects. But we need to focus on the actual process issues.
Thus by focusing on the process issues, we are able to understand where the process variations come from. The goal of
SPC is not to intimidate and reprimand our operators. Here we wish to ensure that we are actually empowering our people
to make decisions based on how our process is operating. So we want to actually give our process owners the control over
the process, so that they can record the behavior of the process. We also want to make sure that we’re giving them the
authority to change the process as needed.
Now we need to understand that SPC is a toolkit, and therefore it does have certain limitations.
• Indeed, SPC is very useful because it provides immediate quantitative feedback on our process behavior. Therefore rather
than waiting and seeing that we have something happening within our process, we are able to see that immediately and
then make changes as needed. But it is crucial to note that SPC is an indicator of our process performance; it's not a short-term cure-all. It's also not a complete quality assurance program, and so SPC does not fix our problems. What it does
is it just provides an indicator that something is happening within our process that we need to investigate.
• The final consideration is we need to think about how control charts are really the primary backbone of SPC. When we
look at control charts, our centerline is the mean of our data and then our upper and lower control limits are a way of
representing the variation from our process. Now they are calculated based on the data and this is why control charts are
considered to be the voice of the process.

Working with control charts


The primary purpose of using a control chart is to determine the process stability such that we’re trying to
understand if we have a stable process over time. So when we talk about a stable process that means our mean
and our standard deviation should be consistent over time.

There are four key reasons for using a control chart.

•The first is to monitor our process. We want to see if our process is stable over time.
•To find root causes of special cause variation. If our process is not stable over time, then we can pick up on when we
have special cause variation. And then based on that we can determine the root causes of why that’s happening and make
further process improvements.
•In addition because we should have a stable process over time that’s consistent in terms of mean and standard deviation,
we can use this information then to predict a range of our process outcomes.
•Finally we can also use control charts to analyze our process variation patterns over time and take that back into our
special cause and common cause variation to identify what our patterns are within our process for further improvement.

We shall now discuss some of the key parts of our process control charts and how they link into some of our
assumptions.

•The first aspect is our target or our centerline. This is the equivalent of our mean of our process, whether it’s the Xbar or
µ. Then our control limits are based on the voice of our process. Now these come specifically from our data and we’re
able to calculate our upper control limit and our lower control limit. Now when we look at our control chart, the
information that we are getting gives us information on our central tendency and our dispersion and our spread. Then we
are able to relate this spread of our data to our actual control limits. Now it’s important to note again that our control
limits are not equivalent to our specification limits. Our control limits are based off of our data itself and our specification
limits come from our customers. Or it could come from regulatory agencies, but these are given to us where our control
limits are actually calculated based on the process data itself.

One of the key assumptions for our control charts is that they follow a normal distribution. And when we
calculate our upper and lower control limit, these are essentially equal to our mean plus three standard deviations
for our upper control limit, or our mean minus three standard deviations for our lower control limit. This means
that we’re accounting for about 99.7% of all of our data points within our control limits. So anything that falls
outside of that is considered special cause variation.

Creating a control chart


•In the first step of the process, we’re plotting the process or product data of interest itself. For instance, this could be the
average weight of a bag of chips or the number of escalated calls in a helpdesk shift.
•The second step then is to draw a line to connect these dots, and this gives us time-ordered series information.
Essentially at this point, what we have is a run chart.
•Then to take this into a control chart, we start by drawing the center line which is also the mean of our data.
•Then the fourth step of our process is to draw in our upper and lower control limits. These limits relate to our process in
terms of any values within our upper and lower control limit represent the normal amount of variation that would provide
a consistent process over time. Now these are calculated by taking plus or minus three standard deviations from our
centerline. Then any value that falls outside of our lower or upper control limit range is considered excessive variation. It is attributed to something other than the normal variation we expect to see within our process.
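A minimal sketch of these four steps, assuming matplotlib is available and using made-up readings, might look like the following; the data values and limits here are illustrative only.

```python
# Sketch of the four control chart construction steps (assumed example data).
# Requires matplotlib; any plotting approach would work equally well.
import statistics
import matplotlib.pyplot as plt

readings = [50.1, 49.8, 50.4, 50.0, 49.6, 50.2, 50.3, 49.9, 50.5, 49.7]  # assumed readings

center = statistics.mean(readings)     # step 3: centerline = mean of the data
sigma = statistics.stdev(readings)
ucl = center + 3 * sigma               # step 4: upper control limit
lcl = center - 3 * sigma               # step 4: lower control limit

# Steps 1 and 2: plot the data points and connect them (a run chart at this stage).
plt.plot(range(1, len(readings) + 1), readings, marker="o")
plt.axhline(center, label="centerline")
plt.axhline(ucl, linestyle="--", label="UCL")
plt.axhline(lcl, linestyle="--", label="LCL")
plt.xlabel("Sample number")
plt.ylabel("Measured value")
plt.legend()
plt.show()

# Any value outside [LCL, UCL] would be treated as excessive (special cause) variation.
print("Out-of-control points:", [r for r in readings if r < lcl or r > ucl])
```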

Types of Control Charts


There are several types of control charts and it is very important as a Six Sigma professional to have an
understanding of when to use which type of control chart.

On the basis of type of data


The first decision in selecting a control chart is based on the type of data: whether it's variable or continuous data, or attribute or discrete data. When we are considering variable or continuous data, this is based on our quality characteristic and whether it can be measured and expressed as numerical data; characteristics such as length, volume, temperature, or time are continuous data.

•If we have variable data, we would select the appropriate type of variables charts. Now if we are picking only one
sample at a time, then we would use the Individual Moving Range (ImR) chart. However, if we have multiple samples,
then we would pick the Xbar and R chart or the Xbar and s chart.
•For our discrete data, this would be data for product characteristics that can be evaluated with a discrete response. For
instance, it could be pass or fail, yes or no, good or bad. Then what we are typically plotting is the number or proportion
of defects, or the number of defectives.
•If we are looking at count data, we would use a c chart or a u chart.
•If we’re looking at classification data, then we would use an np chart or a p chart.

We shall now discuss each of these charts in more detail. The first chart is the Xbar and R chart and this includes
two different charts. They are typically represented with the Xbar chart on the top and the R chart on the bottom.
With the Xbar and R chart, we're typically using this when we have samples of ten or fewer. For
instance, if we had a process where every hour we are pulling five parts, and if we look at our first hour, we would
pull five parts and we would take the average of those five parts and plot it on our sample mean chart, or our Xbar
chart. Then in our second hour, we would pull another five parts and we would take the average of those five
parts. For our range chart, what we’re capturing then is with our five parts that we pull in that first hour, we would
determine the range of those five parts. And then our second hour, we would pull another five parts and we would
determine the range of those five parts.
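The arithmetic behind each plotted point might look like the following sketch, where the hourly five-part samples are assumed values for illustration.

```python
# Sketch: plotted values for an Xbar and R chart from hourly subgroups of five parts.
# The subgroup measurements are assumed example data.
subgroups = {
    "hour 1": [12.1, 11.9, 12.3, 12.0, 11.8],
    "hour 2": [12.2, 12.4, 11.9, 12.1, 12.0],
    "hour 3": [11.7, 12.0, 12.2, 11.9, 12.1],
}

for hour, parts in subgroups.items():
    xbar = sum(parts) / len(parts)   # point plotted on the Xbar (sample mean) chart
    r = max(parts) - min(parts)      # point plotted on the R (range) chart
    print(f"{hour}: Xbar = {xbar:.2f}, R = {r:.2f}")
```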

Our next chart would be our Xbar and s chart. Now this is typically plotted with our Xbar chart on the top and our
s chart on the bottom. The Xbar chart would be calculated the same way we did with the Xbar and R chart. The
difference here is that we use an Xbar and s chart when our sample size is greater than ten. We do this because when we have a sample size greater than ten, our standard deviation is a better
representation of our variation. So with our Xbar and s chart, our Xbar would be calculated the same way we did
for an Xbar and R chart. Let us say for example, we could pull a sample and perhaps we have 15 samples. We
would take the average of those 15 samples and plot it on our Xbar chart. Then for our s chart, we would take the
standard deviation of those 15 samples and plot our standard deviation on our s chart. The next type of chart is
our ImR, or our Individual Moving Range Chart. We use the Individual Moving Range chart when we're only pulling one sample at a time.

This chart is represented by two charts as well, where we have the mean on the top chart, which is similar to an
Xbar, and then our range on the bottom chart. However, with our individual mean since we are only pulling one
sample at a time, when we pull that one sample we’re plotting the value for that one sample, and we will continue
to plot each sample. Then for our Range chart since we’re only pulling one sample, we’re determining the range
from one part to the next part.

Therefore our first value is actually blank, our second value that we are plotting is the difference or the range from
our first part to our second part. So we are plotting the difference between each of those subsequent readings.
With our Attribute charts, we have four different types of charts. With our c and our u charts, these follow a
Poisson distribution. And what we’re doing is we are counting the number of defects instead of categorizing
them as defective or non-defective. With our c chart, we're plotting the number of defects per subgroup or
sample. For example, we can measure defects by day, batch or machine. And the c chart is used when our
sample size is constant.

Now with our u chart, we are plotting the average number of defects per unit, and we use it when our sample size is not constant. The key difference here is we're assuming that a part could have multiple defects, and so
we are taking that into account. We notice with the u chart that the control limits are not straight lines like they
were in the c chart and this is because our sample sizes are varying. When we look at our np chart, we are
plotting the number of defective or nonconforming units. We are using this when our subgroup size is constant
and because the subgroups are equal in size, converting defective counts into proportion isn’t really necessary.
That’s where our p chart comes into play. Our p chart is used to plot the proportion or ratio of defective units. And
we’re using this when our subgroups are not of equal size, so our subgroups are not constant. And this is why the
control limits also do not have straight lines. This is used to signify that we’re using varying sample sizes. And in
particular, the p chart is useful when we have a sample size of 50 or more.
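As a rough illustration of why the p chart's limits vary, the sketch below applies the standard p chart limit formula, p-bar ± 3·sqrt(p-bar(1 − p-bar)/n), to assumed defective counts and varying sample sizes.

```python
# Sketch: p chart control limits with varying subgroup sizes (assumed example counts).
import math

samples = [  # (number inspected, number defective) per subgroup -- assumed data
    (50, 4), (60, 3), (55, 6), (70, 5), (65, 2),
]

total_inspected = sum(n for n, _ in samples)
total_defective = sum(d for _, d in samples)
p_bar = total_defective / total_inspected          # centerline: overall defective proportion

for i, (n, d) in enumerate(samples, start=1):
    se = math.sqrt(p_bar * (1 - p_bar) / n)        # limits depend on each subgroup size
    ucl = p_bar + 3 * se
    lcl = max(0.0, p_bar - 3 * se)
    print(f"subgroup {i}: p = {d / n:.3f}, LCL = {lcl:.3f}, UCL = {ucl:.3f}")
```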

Common and Special Cause Variations


Now let’s take a closer look at the different types of variation on a control chart. When we’re talking about
common cause variation, this is a natural variation that occurs within our process. These are things that would
affect the entire process and not just be specific to one type of machinery or one step within the process. For
example, poor maintenance on machines, poorly written standard operating procedures, or poor work environments if we think about lighting, temperature, and ventilation, as well as normal wear and tear within our
processes. However, special cause variation occurs when we have something specific that happens within our
process. And it’s also commonly referred to as assignable causes because we can typically pinpoint what
happened within our process. Examples of special causes include faulty controllers, or the machine malfunctions,
or we get a new batch of raw material that’s a poor batch of raw material, a part breaks, or there is a power surge.
And so there is something specific that happens that we immediately see a difference in the output. When we
have common cause within our process, we would have a distribution over time that has consistent mean and
standard deviation.

When we think about common causes, this is the normal wear and tear on our machines, natural changes in input material, or operator fatigue over time. So we're not seeing specific trends in terms of changes in our mean and our standard deviation, since this is the naturally-occurring variation within our process. However, with special cause variation, there's a specific change that happens within our
process. We can pinpoint or assign a cause that happens within our process. For example, we could have a
conflict with existing software or the process might have slowed down because of a change in the software
program. We could have a lack of training and skills that happens amongst a few of our operators or agents
within our processes.

Now when we look at special cause variation, in and of itself it doesn’t necessarily make the product or service
defective. In order to know that, we need to compare our variation to our upper and lower specification limits.
Now it’s important to note that our special cause variation is what causes our process to be unstable or out-of-
control. We only know, by comparing our process to our specification limits, whether we actually have a bad product or whether we have economic losses by not meeting the target.

Now within our process, we need to compare our overall process width to our specification width. So it is
essential to note though that if we have an out-of-control process with assignable causes or excessive variation
then we have a greater chance of producing defects. It is that unpredictability that comes from our special
causes that leads to lower quality goods and services and more rework and scrap and waste within our
processes. So what we want to do with our Six Sigma projects is control the special cause variation and ensure that our process is stable over time; this helps to minimize potential quality costs. Now let's take a closer look at some of the patterns that show when we have special cause variation within our process. One
example is when we have an upward trend or we could have a downward trend within our process. This is a sign
that we have a lack of randomness within our data. We would expect, based on natural variation within our
process that our process would have points where it goes up and then down.

Another indicator of when we have special cause variation within our process is when we have a spike within our
data. Now those spikes could be outside of our control limits. Another sign that we have special cause variation is when we have cyclical data. Now we would expect some sort of randomness within our process. But we would
not expect a cycle where our process goes up-and-down, up-and-down, up-and-down.

Then the fourth sign of special cause variation is when we have a shift within our data. We would expect points to vary around our centerline within our control limits, and we would not expect that, over time, our data points would shift so that they cluster within a narrow range away from our centerline. Now, it could be that we've made changes to our process, and in that case we would need to go through and recalculate our control limits. With a normal process, however, we would expect variation right around our centerline.
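Rule sets such as the Western Electric rules formalize patterns like these. The sketch below is a simplified, illustrative check for just two of them, a point beyond the control limits and a run of several consecutive points trending in one direction; the thresholds, rule choices, and data are assumptions for the example only.

```python
# Simplified sketch of two special cause checks (thresholds and data are assumed).
def beyond_limits(points, lcl, ucl):
    """Check 1: any point outside the control limits."""
    return [i for i, x in enumerate(points) if x < lcl or x > ucl]

def trending(points, run_length=6):
    """Check 2 (simplified): run_length consecutive points all rising or all falling."""
    flagged = []
    for i in range(len(points) - run_length + 1):
        window = points[i:i + run_length]
        diffs = [b - a for a, b in zip(window, window[1:])]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            flagged.append(i)
    return flagged

data = [50.1, 50.3, 50.6, 50.9, 51.2, 51.6, 52.0, 49.8, 50.0, 56.0]  # assumed readings
print("Points beyond limits:", beyond_limits(data, lcl=47.0, ucl=53.0))
print("Start of trending runs:", trending(data))
```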

Choosing the correct control chart to use in different scenarios


Attributes charts

Data type: Attributes chart (discrete data)
•Defects (number of nonconformities), constant sample size: c chart (number of defects)
•Defects (number of nonconformities), varying sample size: u chart (defects per unit)
•Defectives (number of nonconforming units), constant sample size: np chart (number of defective units)
•Defectives (number of nonconforming units), varying sample size: p chart (percentage of defective units)

Data type: Variables chart (continuous data)
•Sample size = 1: ImR chart
•Sample size = 2-10: X-bar and R chart
•Sample size > 10: X-bar and s chart
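One hedged way to picture the decision logic in these two tables is a small helper like the sketch below; the function and parameter names are made up for illustration and are not part of any standard SPC toolkit.

```python
# Illustrative helper mirroring the chart selection tables above.
# Function and parameter names are hypothetical, not from any standard library.
def select_control_chart(data_type, sample_size=None, counting="defects", constant_sample_size=True):
    if data_type == "continuous":                     # variables charts
        if sample_size == 1:
            return "ImR chart"
        if sample_size is not None and sample_size <= 10:
            return "Xbar and R chart"
        return "Xbar and s chart"
    # attributes charts (discrete data)
    if counting == "defects":                         # counting nonconformities per unit
        return "c chart" if constant_sample_size else "u chart"
    return "np chart" if constant_sample_size else "p chart"   # counting defective units

print(select_control_chart("continuous", sample_size=5))                    # Xbar and R chart
print(select_control_chart("discrete", counting="defectives",
                           constant_sample_size=False))                     # p chart
```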

Variables for Statistical Process Control


In Statistical Process Control it is essential to choose the right number of variables. There are several ways we can go about
selecting our variables. We could have variables that were part of our DMAIC process improvement project or we could
have variables that need to be monitored as part of an SPC activity that's done independently of a project, to monitor and analyze those variables because those processes are currently not in control. It is essential to make sure that we are selecting the right variables: if we select too many variables, that leads to wasted time and wasted effort, which we want to avoid as much as possible, and we can't focus on the ones that are really critical, so our benefits decrease and our costs increase.
Also it is important that when we are doing the Six Sigma project, those variables that we are trying to improve will typically
naturally come into the Control phase where we could perform Statistical Process Control. However, when we are running
other process improvement efforts, we want to be careful that we are selecting the right variables based on what our
customer’s needs and expectations are.
In addition, if we are running long-term Six Sigma projects or continuing activities, we might want to broaden the net on these long-term projects to make sure that we're looking at the full range of potential variables that are important, not only as part of the specific DMAIC methodology but also to control the overall process.
Now there are four key categories of variables that are important to take into account when we’re selecting the right
variables.
• The first category is those variables that are difficult to control. For example, if we have a variable that’s associated with a
high defect rate, we’d really want to focus our efforts on reducing those defect areas by controlling our process through
Statistical Process Control. Another example of a process that would be difficult to control is a process that has considerable
variation. And so we would want to be able to look at our process and understand the variation from that process in terms
of special cause and common cause variation.
• The second category is those variables that are tied to customers and what their key imperatives are. So we want to make
sure that we’re taking into account those variables tied to customer, organizational, or regulatory imperatives. When we
talk about customer complaints these are things that the customer would be unhappy about with the product or service.
And so that’s a variable we might want to consider using for a control chart. In addition, if we have a specific customer
request. If the customer is requesting a key aspect of the product, then that means it’s something that they are particularly
interested in. And so that should be an area of focus. For instance, if we have a cellular phone manufacturer, they might
know that their customers don’t really care about multiple ringtones, but they do care about battery life. And so the
battery life might be something that we should focus on for a control chart.
• The third aspect is standards: if the organization needs to adhere to applicable organizational or regulatory standards,
then we might want to use control charts to make sure that the process is stable. For instance, if we have a company that
wants to expand their organization and build on a new premises, they need to make sure that the new building meets
environmental standards and so that might be important to the company. Another aspect to take into account when we are
selecting our variables for Statistical Process Control is to look at the critical dimensions of the product or process that’s
under consideration. So when we look at critical dimensions, these could be things that affect human safety or the
environment. Since there could be adverse risks, we would want to make sure that we’re closely monitoring these.
In addition, when we look at a product, our customers are typically buying it for a specific use. If that use
degrades for any reason, then the customer is going to be unhappy with the product. So we could conduct a Statistical
Process Control on those key aspects that tie to the product’s use. Then in terms of reducing risks, anything associated with
processing failures or aspects of the product or service that cause high internal costs could be important. Using that we would
want to tie into those variables that would help control our process. Another aspect to consider is those salient or known
variables. So these would be variables that we know based on past historical data that they usually exhibit special cause
variation. We could also monitor our root causes: if we know what our true causes are, we can conduct Statistical Process Control on those root causes. Now as we do this, the variables would be measured by the person that's
actually doing the charting, so they would have insight into how the processes are operating. Other good variables to track are leading indicators, which are important because they give us information about what the output of our process will be.

Selecting Variables
Now that we have discussed the criteria on how to select variables for Statistical Process Control, let’s go through a scenario
to illustrate how we would use those criteria. Let us assume for this example that we’re part of a Lean Six Sigma project
team at a toy manufacturer. Over the past few weeks, we’ve received several complaints from customers about the brittleness
of certain motorized toys and their inability to maintain shape after only a few weeks. This is concerning to the company and
management in particular. Their concern centers mainly on the customer complaints and the excessive variation within the
products. Within the organization, cost control and customer satisfaction are key. So the company and the team find that the
variations mostly occur in the toy’s base structure as it goes to the molding and stamping process. We are able to determine
that customer complaints about brittleness and shape also relate to the base structure. Using this information the team is able
to identify 15 probable input and output causes and variables in the process. Each of these could be related to variation. As a
team we are able to eliminate seven of these variables at the primary stage, because they are not critical to product quality or
customer requirements.
Now the eight remaining variables are the quantity of liquid plastic that’s poured into the die, the die temperature, the liquid
plastic temperature, liquid plastic density, the injection pressure, cooling time, material elasticity, and then the tensile
strength of the base structure. Now it is important to note that with eight variables, it would be too time-consuming and too expensive to conduct Statistical Process Control on all of them. So the team needs to narrow down the
variables to focus really on those variables that are critical to cost, variability, and customer satisfaction. As a team we start
digging into each of these characteristics more deeply. As a team we determine that the brittleness is directly associated with the liquid plastic temperature, since the base structure is brittle as it comes off the mold when temperature standards are not followed. The team was unable to prove that the liquid plastic density could also be directly related to the base structure
having low tensile strength. However, considering its significance to customer satisfaction, the team decided to monitor this
variable separately.
As part of the process of looking at each of these characteristics, the team talks to the production manager. The production
manager reported to the team that the material is often discarded due to the liquid plastic overheating or the liquid plastic
density straying away from the technical standards. So as a team we were able to note that the quantity of liquid plastic
poured in the die is controlled manually. And that leads to a considerable amount of rework and waste in the process. Then
finally, as a team we conduct a factor analysis with a large sample, looking at other variables such as cooling time, die temperature, and injection pressure, and we determine these are not major causes of variation. So as a team we are
able to narrow this down to four key variables that we should monitor. Those include quantity of liquid plastic poured in the
die, liquid plastic temperature, liquid plastic density, and tensile strength of the base structure.
Let us take a look at why each of these variables was actually chosen. When we look at the liquid plastic temperature, this
was chosen because it’s a key process variable and it impacts the products. And items must adhere to applicable
organizational or regulatory standards. The next characteristic that was chosen was the liquid plastic density. This was
chosen because it’s a key process variable that impacts the product and it’s also one of the major sources of customer
complaints. The next characteristic chosen was the quantity of liquid plastic that’s poured in the die. It was chosen because it
contributes to high internal cost, and this is a process that runs at a high defective rate. Then finally the fourth characteristic
is the tensile strength of the base structure. It was chosen because it’s a variable that’s known to exhibit a lot of variation and
it’s also a major source of customer complaints.

A key aspect within Statistical Process Control is to determine what our subgroups are; this is where rational subgrouping comes into play. When we talk about rational subgroups, what we are trying to do is make sure that
we can identify special cause variation in our data using our control chart. In order to do this we need to have
data that’s as representative as possible and that way it helps us to identify where the variation is coming from.
We can do this by collecting a set of subgroup samples. For instance subgroup A, B, C, and D and each of these
individual subgroups are collected under similar conditions. This provides us with rational subgroups that are
small homogeneous samples because they are all collected under the same set of conditions. They are also taken within a short space of time, which helps to make sure that the data is collected under the same or very similar conditions. Here, what we are trying to do is contain the within-subgroup variation, and that's what is
called rational subgrouping. This is useful within Statistical Process Control because it helps to limit the
variability that’s inherent to a particular subgroup of samples within that subgroup.

By doing this we are able to observe the special cause variation that’s happening within our process. Also if we’re
able to collect our subgroups properly, then we’re only getting the variation that comes naturally within the
process, and that helps to identify any unusual or our special cause variation. Now within SPC, we are using our
subgroups so that we can separate the within group and between group variation within our process. When we
are talking about our within group variation, these are our individual samples from that subgroup. We are
calculating the variations within that individual subgroup. To minimize that uncommon or unnatural variability
that comes within the same subgroup, it’s important to make sure that within each of these subgroups that we’re
collecting information as consistently as possible or under those same set of conditions. For example, if we look
at a die cut manufacturing operation that produces 50 parts per hour, the operator would measure four randomly
selected parts at the beginning of every hour. Then each sample of those four parts is a subgroup.
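Continuing that die cut example with assumed measurements, the sketch below separates the within-subgroup variation (the range inside each hourly sample of four parts) from the between-subgroup variation (the spread of the hourly subgroup means).

```python
# Sketch: within-subgroup vs. between-subgroup variation (assumed measurements,
# four randomly selected parts measured at the start of each hour).
import statistics

hourly_subgroups = [
    [5.02, 4.98, 5.01, 5.00],   # hour 1
    [5.04, 5.03, 4.99, 5.02],   # hour 2
    [4.97, 5.00, 5.01, 4.98],   # hour 3
]

subgroup_means = [statistics.mean(g) for g in hourly_subgroups]
subgroup_ranges = [max(g) - min(g) for g in hourly_subgroups]

within_variation = statistics.mean(subgroup_ranges)    # average within-subgroup range
between_variation = statistics.stdev(subgroup_means)   # spread of the subgroup means

print("Subgroup means:", [f"{m:.3f}" for m in subgroup_means])
print(f"Within-subgroup variation (mean range): {within_variation:.3f}")
print(f"Between-subgroup variation (std of means): {between_variation:.3f}")
```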

Then once we have our subgroups, we can look at the variation between each subgroup. That’s looked at based
on the variation coming from each hour of the process. Now we have talked about how control charts rely on
rational subgrouping of the process data. When we look at our upper and our lower control limits for our control
charts, those are calculated using the process variability that comes from within the subgroups. It is important to
make sure that we are selecting our subgroups so that we’re only getting a common cause variation within that
process. Once we have our upper and lower control limits identified, then we can plot the between subgroup
variability. It is essentially our line that connects each data point on our control chart. Then we can use that
information to see if the variation corresponds to the same level of variation as within subgroups. If they do not
then we’re saying that our process is out of control.
The goal with Statistical Process Control is to reduce our excessive variation. And we can do this by eliminating
between subgroup variations, which is our special cause. And by reducing the within subgroup variation, which is
our common cause. So now let’s take a look at why sampling is needed in Statistical Process Control. Now
remember the goal of Statistical Process Control is to identify our special cause variation. And to do that we need
to make sure that our subgroups are free of our special cause variation. Once we’ve identified our special cause
variation, then we want to compare our within group variation to our between group variation. There are three key
requirements for collecting our subgroup samples. The first one is that our observations must be comprised of
data points that are independent. And then second, our observations that are within group are from a single
stable process. And then finally when we look at our subgroup samples, these subgroups are formed from
observations that are taken in a time-ordered sequence.

Process of applying Rational Subgrouping


The primary rules about choosing subgroups are to make sure that we are taking subgroups from different times,
different locations, different suppliers, and different operators. This helps to show the differences between these
types of variables. Another key rule is that in order to create meaningful control charts, we need to make sure that
our subgroup samples were produced under similar conditions. This helps as we set up our process control chart because what we're trying to do is ensure that the variation within each subgroup is attributable to natural or common causes only. When we look between subgroups, we are capturing all the
possible variability within our system. Let's take a look at an example of how we could apply rational subgrouping in practice.

Illustration: We have a cereal manufacturer that wants to reduce variation in their box filling process. Within this
process there are three different filling machines. We have an operator that’s taken some initial data on the
process and they’re trying to decide how to organize it into a process control chart. The operator has collected
data using 15 different samples. And the characteristic under consideration is the weight of each box. The
operator then wants to obtain rational subgroups for an Xbar and R control chart, but they are not really sure what
that is. So the operator works with the Six Sigma team and they have several suggestions that emerge from this
process. The first option is to consider all 15 data points as one subgroup and collect more subgroup data the
next day. Now we wouldn’t want to do this because it goes against the rational subgrouping practices that we’ve
discussed so far. The second option would be to organize the data into subgroups of three taken each half hour.
The problem with this approach would be that we wouldn’t be taking into consideration the variation from all
three machines.
The last option would be to treat each of the machines as a rational subgroup of five. This would be the best
option because we’re able to take into account the three machines separately. We are also able to get
information over the time the sample was taken. So now let’s take a look at some of the benefits of using smaller
subgroups. When we look at large subgroups, they may contain dependent data or special cause variation.
What we want to do is make sure we’re collecting information under similar conditions. We may not be able to
capture that with large subgroups. In addition when we use smaller subgroups, it’s much more economical than
collecting multiple parts and large subgroup sizes and having to inspect each of those. In addition when we talk
about collecting small subgroup sizes, these can be assumed to be rational. And with small subgroup sizes, we can detect small shifts, which might be more difficult to detect when we have large subgroup sizes.

Now the issue, though, is that if we have small subgroups, we may not be able to detect large shifts within our data. Typically, when we're talking about service processes, we're talking about a sample of one because we're able to gather information on that one service encounter. Another consideration that we need to take into
account with our subgroup size is that we need to make sure that we have sufficient history of the process in
order to create our control chart. Our control chart calculates our upper control limit, our lower control limit, and
our centerline based on current data. And so we have to have enough data observations so that we can calculate
those reliable estimates of our average and our variation before we start a control chart. In addition we can’t
distinguish between special causes and common causes if we don’t have enough subgroups to define the
common cause within the operating level of the process.

Characteristics of Control Plan


In the Control phase of the DMAIC methodology, another commonly used tool is the control plan. The control plan is a document in a tabular format that reflects the current methods of control and the measurement systems. It is a way to make sure that we're monitoring and controlling the processes through measurements and inspections and that we also have reaction plans in place. A control plan can be used for either a single process or a family of processes, and it's something that's owned and updated by the process owner. Essentially, it takes us through the process so we can see, if certain things happen, what the plan is.

There are several key contents that are included within the control plan.

•The first is the characteristics of the process. In addition, the control plan captures the process history and all the process improvements that have occurred. One of the key benefits of using the control plan is that it takes into account measurement and inspection; if something does go wrong, there are reaction plans or contingency plans on how to handle the situation when one of the critical characteristics does not meet the specifications.
•The control plan also includes information on the frequency and the method of how we’re measuring and evaluating the
critical characteristics.
•In addition the control plan includes information on the roles and responsibilities of the team members and key indicators
for how we’re going to monitor the control of the process.

There are four key factors that affect the complexity of the control plan.


Type of Business: The first is the type of business. This determines in a large part how a control plan will be designed.
Now while there are standard templates that can be used, this really ties into the complexity of the product and the process
and the business. For instance, controlling a tangible manufactured product will involve entirely different measurement tools and techniques than measuring customer service results at a call center, or determining the speed of transactions for a retail provider at a point of sale.

Nature of Process: Another factor that affects the control plan is the nature of the process. If we have a simple process, it
might require fewer control elements than the more complex operations. Controlling the final measurement for a single
part, for example, probably requires a different level of complexity than controlling a fully assembled jet engine. And so
that would need to be taken into account.

Voice of the Customer: The third factor is the voice of the customer. Even if we have the same product and the same process, we may have different customers with different needs and expectations, which might lead to divergent control plans. For example, wholesale customers might place a different emphasis on shipping and packing items than retail customers would place on that same product.

Feasibility of Implementation: The fourth factor is the feasibility of implementation. Since we have different levels of complexity, some situations require hard quantitative data, rigid controls, and highly detailed documentation, while in other situations we might rely on more subjective evaluations. So the process scope, and the financial and physical resources available to the Six Sigma team, may affect how the plan is structured. There may not be time and money available for more complex, detailed measurement systems to be used more frequently.
There are three phases then in creating a control plan.


Identifying the Process Scope: The first phase is identifying the process scope and this is where we define the purpose
of the process, its boundaries, and its intended use. And we would also translate the voice of the customer into
quantifiable measurable objectives.

Determining the Indicators: The second phase is determining the indicators and responsibilities. And this is where the
team would be involved with designing the control process to make sure we’re meeting their critical customer
requirements. And these are commonly referred to as the CCRs.

Review the Control Plan: In the third phase, the team would be reviewing the completed control plan. This is where we
would define the rest of the elements that would be included in the control plan. And the team would need to keep in mind
that each control plan has its particular needs. So we need to evaluate the finished control plan, perform a potential
problem analysis and then document any lessons learned in this final phase.
Now because the control plan contains a considerable amount of information, the team would need to pull
together various sources of information to complete the process control plan. And so some of the sources of
information could include process flow diagrams, a system level Failure Modes and Effects Analysis, a Design
FMEA or a Process FMEA, historical data on the process, the knowledge from the team on the process, designed
experiments, design reviews, multi-vari studies, and lessons learned.

Control Plan Elements


We shall now discuss in detail each of the sections of our control plan more closely and how we would complete
those. Now it’s important to note that with our control plan, even though we have a basic template that would be
used, our control plans are going to vary greatly from project to project, just depending on the type of situation,
the nature of our business, the voice of our customer, and our feasibility of implementation. While we go through
each of the main elements, it's important to remember that they might vary greatly. Typically, the top portion of our control plan is where we include more of the administrative information. This gives us more information about what the product is and what we're trying to accomplish.

•The first part of it is our control plan title, this is included so we can distinguish this control plan from other documents,
such as our operating instructions or our Six Sigma database.
•The next piece of information is our reference number, this needs to be a unique reference number for our plan. It’s
sometimes called our control number and it might be supplied by the responsible party. Another key aspect is our page
number. By including the pagination we can make sure that we have all the documentation necessary and none of our
pages get out of order.
•The next aspect is our team members and this is where we would list all of the team members that participated in
creating the control plan.
•The next two pieces of information are our original date and our revision date. We would want to include the original
date and this is a date where we first developed the control plan. Then all of our subsequent revisions would be included
with the revision date that would provide information to make sure that we’re working off the latest and most current
control plan. Other information in the header of the document is the process owner. Note, the process owner is the person
that’s ultimately responsible for monitoring and execution of the control plan. We also need to include the process
description. This gives us an overall description of what the process is for this control plan and what we’re documenting.
And then finally we have the process customer. Here we would list who the recipients are of this process output, because
the objective of this process is to make sure we’re meeting their input requirements.
•The next step in the process is to include information on the specific control indicators. With this information, we would
want to include what our critical customer requirements are. These are those specifications of the process that influence
how the process operates. We would also want to include information on the outcome indicators.

This is where we would be able to demonstrate the degree to which the process improvements have resulted in
lasting change. In other words this is how we’re going to evaluate how well our process meets our customer
requirements. Now that the information in the header of the document is completed, we would start filling
out information within the body of the control plan. The first step of the process would be to list each process
step and then we would fill the information on the key process input variables or the key process output variables.
Now it is important to note here that we would only have one of those. We would have a key process input
variable or a key process output variable. The next piece of information would be our process specification. And
within our process specifications, we would include the target and our upper and lower specification limits. And
these would be provided by our customer. The next piece of information would be our capability. Based on
historical data, we could determine what our process capability is. And we could calculate information such as
our initial Cpk, the date it was taken, and our sample size for that Cpk value. And then next we would look at our
measurement system.
For our technique, we would capture what our specific gauge was for our measuring technique. And then we
would include information on our Gauge R&R. So we would know how accurate our measurement system is. And
then the final six fields within this table start with the method. The method is how we’re going to control our
variables, whether we’re using control charts, checklist, visual inspections, or automated measurements. The
Who would provide information on who is responsible for collecting that information. The Where and the What
would be where in the process this information is being collected, and what information is being collected. The
When would capture information on our sampling frequency. And then our reaction plan would include
information on when our process is out of control or if we have a product that begins to fail inspections. This
would be the actions that we would take at that point.
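One way to picture a single line item in the body of the control plan is as a structured record, as in the sketch below; the field values are purely illustrative and borrow from the toy manufacturer example discussed earlier.

```python
# Illustrative sketch of one line item in a control plan body (values are assumed).
from dataclasses import dataclass

@dataclass
class ControlPlanRow:
    process_step: str
    key_variable: str          # a KPIV or a KPOV, but only one of the two
    specification: str         # target plus upper/lower specification limits from the customer
    capability: str            # e.g. initial Cpk, date taken, sample size
    measurement_system: str    # gauge/technique and Gauge R&R result
    method: str                # control chart, checklist, visual inspection, automated measurement
    who: str
    where_what: str
    when: str                  # sampling frequency
    reaction_plan: str

row = ControlPlanRow(
    process_step="Molding",
    key_variable="Liquid plastic temperature (KPIV)",
    specification="Target 210 C, LSL 205 C, USL 215 C",
    capability="Initial Cpk 1.4 (sample size 30)",
    measurement_system="Thermocouple, Gauge R&R 8%",
    method="Xbar and R chart",
    who="Line operator",
    where_what="Molding station, melt temperature",
    when="Every hour",
    reaction_plan="Stop the line and notify the process owner",
)
print(row)
```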

Then finally there's a place on the control plan to include miscellaneous information, or notes. This could be anything that we need to take into consideration as we're monitoring our process. Once we have our control plan
complete, there are two key categories of additional information that are often included with our control plans.
And those are graphical elements and our lessons learned. It’s useful to make sure we’re including graphical
elements along with the control plans because it helps to provide more detail about the specification limits and
the capability figures that were recorded in the control plan. Providing these images helps to give a more in-depth visual performance history of the process, including those summary statistics. And then also, to
make sure that we’re not losing our quality improvement gains, the control plan should also document what went
right and what went wrong based on our lessons learned. And that’s very useful information to take into our next
process improvement project.

Transferring Responsibility
Once we have created our control plan, it's important as a Six Sigma team to make sure that we have an effective
handoff of our control plan to the process owner. We hand over the control plan because we've completed our
project and it's time to transition it to the process owner, since they're the ones who will have ownership of the
process going forward. In addition, there are going to be personnel changes: while we have handled it as a Six
Sigma project, it's now time for the process owner to take over, and it's important to make sure that this is a
smooth transition. If our control plan is not adequately maintained during the transition, the improvements that
were achieved during the Six Sigma project are going to deteriorate and our process will revert to the old way of
doing things. There are several requirements when we talk about the handoff. The first is to make sure that the
control plan is integrated. As a Six Sigma team we need to re-examine the control plan in light of how it's
integrated with other related processes. For example, if we have an order fulfillment process that must interface
with the order taking process on one hand, we need to make sure that on the other hand it's also interfacing with
the shipping department. So our control plan needs to allow for a smooth interaction between both of those
processes. The second aspect is securing understanding and agreement.

The process owner and their employees need to understand what's involved, and they need to accept
accountability for controlling the processes. Most likely these people have been consulted and involved in the
process improvement project, so they are people we've talked to already, and they've probably also helped develop
the control plan. Moving forward, though, the Six Sigma team needs to make sure that they are fully trained in all
of their responsibilities, that this training is effective, and that the team members and the process owners are on
board with the program. The third requirement is establishing the documentation processes. This is where we
need to make sure the methods involved in the control plan are well-designed, clear, and easy to use. This ensures
that the people who are going to be performing the monitoring can effectively document the control processes,
both in written and graphical forms. The fourth aspect is to update the work instructions with clear directions. The
team needs to ensure that all of the work instructions, including the documents that are inside and outside of the
area, have been updated with adequate and clear reaction plan instructions. This helps to ensure that everyone's
on the same page about what they need to do when a nonconformance or instability is identified in the process.
Developing the Control Plan

The checklists below help us complete all steps of the control plan development process and ensure that all
appropriate information is included in the plan; a minimal scripted completeness check based on the elements
checklist is sketched after it.

Elements of a control plan checklist

Item                              Checked
Control plan (title)
Reference number
Team members
Process owner
Page
Original date
Revision date
Process customer
Process description
Critical customer requirements
Outcome indicators
Part or process step
KPIV inputs
KPOV outputs
Process specification
KPIV inputs
KPOV outputs
Capability
Measurement system
When (frequency)
Where/what
Who
Method
Reaction plan
Notes/misc.
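
As a hedged illustration of using the elements checklist programmatically, the sketch below checks a draft control plan record, represented here as a plain dictionary with assumed key names, for items that are still missing. It is only a sketch of the idea, not a standard Six Sigma tool.

```python
# Required items taken from the elements-of-a-control-plan checklist above.
REQUIRED_ELEMENTS = [
    "control_plan_title", "reference_number", "team_members", "process_owner",
    "page", "original_date", "revision_date", "process_customer",
    "process_description", "critical_customer_requirements",
    "outcome_indicators", "part_or_process_step", "process_specification",
    "capability", "measurement_system", "when_frequency", "where_what",
    "who", "method", "reaction_plan",
]

def missing_elements(control_plan: dict) -> list:
    """Return checklist items that are absent or left blank in the plan."""
    return [item for item in REQUIRED_ELEMENTS if not control_plan.get(item)]

# Illustrative, deliberately incomplete control plan record:
draft_plan = {
    "control_plan_title": "Final assembly torque control",
    "reference_number": "CP-014",
    "process_owner": "Assembly supervisor",
    "reaction_plan": "Stop line and notify supervisor",
}
print("Still to complete:", missing_elements(draft_plan))
```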

CONTROL PLAN MANAGEMENT CHECKLIST

Element / Ensure that / Checked

Process documentation: The quality, quantity, and detail of the process documentation have been agreed upon, and the process for documenting the process in written and graphic form has been agreed upon.
Customer involvement: The critical customer requirements have been translated into the design requirements of the process.
Output: The output of the process or process step has been determined.
Input: The input of the process or process step has been determined.
Metrics: Metrics have been established to assess key indicators.
Measurement technique: The measures, charts, frequency, and method of sampling have been determined.
Specifications: The minimum and maximum specifications and units of measure have been identified.
Control method: The method for controlling the process has been planned and reviewed by the Six Sigma team.
Reaction plan: Clear operator instructions are in place for actions to be taken when nonconformance or process instability is identified.
Supplier involvement: Suppliers are committed to the input requirements, including materials, timing, and cost.
Integration: The process has been examined in light of its integration with, and its effect on, related and contiguous processes.
Team: Six Sigma team members are aware of the configuration of roles and responsibilities and are capable of performing their jobs.
Executive support: The executive sponsor is supportive of the process control plan.
Process owner: The process owner understands the configuration and documentation needs of the process, and has accepted accountability for process accuracy and timeliness.
