
Measuring and monitoring quality of care to improve maternal, newborn, child and adolescent health services
A practical guide for programme managers
Measuring and monitoring quality of care to improve maternal, newborn, child and adolescent health services: a
practical guide for programme managers
ISBN 978-92-4-010573-7 (electronic version)
ISBN 978-92-4-010574-4 (print version)
© World Health Organization 2025
Some rights reserved. This work is available under the Creative Commons Attribution-NonCommercial-ShareAlike
3.0 IGO licence (CC BY-NC-SA 3.0 IGO; https://creativecommons.org/licenses/by-nc-sa/3.0/igo).
Under the terms of this licence, you may copy, redistribute and adapt the work for non-commercial purposes,
provided the work is appropriately cited, as indicated below. In any use of this work, there should be no suggestion
that WHO endorses any specific organization, products or services. The use of the WHO logo is not permitted. If
you adapt the work, then you must license your work under the same or equivalent Creative Commons licence. If
you create a translation of this work, you should add the following disclaimer along with the suggested citation:
“This translation was not created by the World Health Organization (WHO). WHO is not responsible for the content
or accuracy of this translation. The original English edition shall be the binding and authentic edition”.
Any mediation relating to disputes arising under the licence shall be conducted in accordance with the mediation
rules of the World Intellectual Property Organization (http://www.wipo.int/amc/en/mediation/rules/).
Suggested citation. Measuring and monitoring quality of care to improve maternal, newborn, child and adolescent
health services: a practical guide for programme managers. Geneva: World Health Organization; 2025. Licence:
CC BY-NC-SA 3.0 IGO.
Cataloguing-in-Publication (CIP) data. CIP data are available at https://iris.who.int/.
Sales, rights and licensing. To purchase WHO publications, see https://www.who.int/publications/book-orders.
To submit requests for commercial use and queries on rights and licensing, see https://www.who.int/copyright.
Third-party materials. If you wish to reuse material from this work that is attributed to a third party, such as tables,
figures or images, it is your responsibility to determine whether permission is needed for that reuse and to obtain
permission from the copyright holder. The risk of claims resulting from infringement of any third-party-owned
component in the work rests solely with the user.
General disclaimers. The designations employed and the presentation of the material in this publication do not
imply the expression of any opinion whatsoever on the part of WHO concerning the legal status of any country,
territory, city or area or of its authorities, or concerning the delimitation of its frontiers or boundaries. Dotted
and dashed lines on maps represent approximate border lines for which there may not yet be full agreement.

The mention of specific companies or of certain manufacturers’ products does not imply that they are endorsed
or recommended by WHO in preference to others of a similar nature that are not mentioned. Errors and omissions
excepted, the names of proprietary products are distinguished by initial capital letters.
All reasonable precautions have been taken by WHO to verify the information contained in this publication.
However, the published material is being distributed without warranty of any kind, either expressed or implied.
The responsibility for the interpretation and use of the material lies with the reader. In no event shall WHO be liable
for damages arising from its use.
Editing and design by Inis Communication
Contents

Acknowledgements
Abbreviations
1. About the guide
   1.1 Purpose and focus of the guide
   1.2 How the guide was developed
   1.3 How to use the guide
   1.4 Audiences for the guide
2. Understanding quality of care measurement
   2.1 What is quality of care?
   2.2 Measuring quality of care
3. Selecting quality of care indicators to monitor and guide improvement
   3.1 Key messages
   3.2 Chapter overview
   3.3 Key terms and concepts
   3.4 Practical guidance
   3.5 Country example
4. Assessing and strengthening health information systems to measure and monitor prioritized quality of care indicators
   4.1 Key messages
   4.2 Chapter overview
   4.3 Key terms and concepts
   4.4 Practical guidance
   4.5 Country example
5. Tracking and analysing quality of care indicators to guide improvement
   5.1 Key messages
   5.2 Chapter overview
   5.3 Key terms and concepts
   5.4 Practical guidance
   5.5 Country example
6. Assessing and improving data quality to strengthen quality improvement results and stakeholder trust
   6.1 Key messages
   6.2 Chapter overview
   6.3 Practical guidance
   6.4 Country example
7. Strengthening quality improvement measurement capability of key actors
   7.1 Key messages
   7.2 Chapter overview
   7.3 Key terms and concepts
   7.4 Practical guidance
   7.5 Country example
References
Annex. Detailed metadata for core MNCH QoC indicators
Acknowledgements

The World Health Organization (WHO) gratefully acknowledges the role of the following
individuals, organizations and governments that contributed to the conception, technical
writing, peer-review and finalization of this guide.

WHO leadership

Moïse Muzigaba, from the WHO Department of Maternal, Newborn, Child and Adolescent Health and Ageing (MCA), served as the responsible technical officer and coordinated the development of this guide under the oversight of Theresa Diaz (MCA).

Lead writers

This guide was conceived and primarily written by Moïse Muzigaba (MCA) and Kathleen
Hill (United States Agency for International Development (USAID) MOMENTUM Country
and Global Leadership Project and Jhpiego, United States of America).

Contributors

External experts

Sincere gratitude goes to the following experts who were involved in the initial
conceptualization and drafting of some sections of the guide: Sodzi Sodzi-Tettey (USAID
MOMENTUM Country and Global Leadership Program and the Institute for Healthcare
Improvement, Ghana), Stephen Luna-Muse (USAID MOMENTUM Country and Global
Leadership Program and the Institute for Healthcare Improvement, United States of
America), Debra Jackson (London School of Hygiene and Tropical Medicine, and the
University of the Western Cape, United Kingdom of Great Britain and Northern Ireland),
and Tricia Bolender (USAID MOMENTUM Country and Global Leadership Program and
the Institute for Healthcare Improvement, United States of America).

WHO staff and consultants

Special gratitude is extended to the following WHO staff from various regional and country
offices, as well as a consultant, for their invaluable contributions to the development,
refinement and validation of the Health information system landscape assessment (HISLA)
tool: Josephine Agyeman-Duah (WHO consultant), Shogo Kubota (WHO Regional Office
for the Western Pacific), Delgermaa Vanya (WHO Regional Office for the Western Pacific),
Zhao Li (WHO Regional Office for the Western Pacific), Ogusa Shibata (WHO Regional
Office for the Western Pacific), Justice Sitsofe Yevugah (WHO country office in Sierra
Leone), Makeba Shiroya (WHO country office in Kenya), Kenneth Mutesasira (WHO country
office in Uganda), Bongomin Bodo (WHO country office in Uganda), Kurabachew Alemu
(WHO country office in Uganda), Susan Kambale (WHO country office in Malawi), Solome
Nampewo (WHO country office in Malawi), Teshome Desta Woldehanna (retired staff, WHO
Regional Office for Africa), Leonard Cosmas (WHO country office in Kenya) and Assumpta
Muriithi (retired staff, WHO Regional Office for Africa).

WHO reviewers

Special thanks are extended to the following WHO reviewers from its headquarters and
regional and country offices for their valuable independent technical review of the guide:
Blerta Maliqi (WHO Department of Integrated Health Services), Elizabeth Katwan (MCA),
Jean Pierre Monet (MCA), Sonali Vaid (WHO Regional Office for the Western Pacific), Hillary
Kipruto Kipchumba (WHO Regional Office for Africa) and Binyam Hailu Getachew (WHO
country office in Sierra Leone).

External peer reviewers

WHO is grateful to the following expert peer reviewers, in alphabetical order, for their
independent technical review of the content and organization of the guide: Aluvaala
Jalemba (School of Medicine, University of Nairobi, Kenya), Eyob Gebretsadik
(independent expert, Ethiopia), Jil Molenaar (University of Antwerp, Belgium), Martin
Dohlsten (United Nations Children’s Fund, Nigeria), Remi Mwamba (United Nations
Children’s Fund, United States of America), and Tamar Chitashvili (John Snow Inc., United
States of America). WHO also acknowledges the valuable support of several members
of the Life Stages Quality of Care Metrics Technical Working Group (LSQM TWG), who
provided verbal feedback during oral presentations on multiple occasions throughout
the development of this guide.

Government institutions

WHO expresses gratitude to the following individuals, in alphabetical order, from government institutions within the Network for Improving Quality of Care for Maternal,
Newborn, and Child Health who participated in the needs assessment survey that informed
the development of this guide: Anthony Adofo Ofosu (Ghana Health Services, Ghana),
Ftalew Dagnaw (Ministry of Health Ethiopia, Ethiopia), Mukome Nyamhagatta (Ministry
of Health, Community Development, Gender, Elderly and Children, United Republic of
Tanzania), Padmini Kashyap (Ministry of Health & Family Welfare, India), Raffii Jaffar Ali
(Ministry of Health Zanzibar, United Republic of Tanzania), S.K. Sikdar (Ministry of Health
and Family Welfare, India), and Shegaw Mulu Tarekegn (Ministry of Health of Ethiopia,
Ethiopia). Special thanks are also extended to the numerous government representatives,
health care workers, quality improvement teams, and other government stakeholders
who contributed to the activities and outcomes highlighted in the country examples
featured in this guide.

Technical and implementing partners

WHO appreciates the contributions of many technical and implementing partners who
supported the Network for Improving Quality of Care (QoC) for Maternal, Newborn and
Child Health. Their support helped highlight the need for this guide and contributed to
the evidence and selected country examples included. These include, in alphabetical
order: All India Institute of Medical Sciences (AIIMS), Department for International
Development – Deutsche Gesellschaft für Internationale Zusammenarbeit, Institute
for Healthcare Improvement, Jhpiego, Japan International Cooperation Agency, KEMRI
Wellcome Trust, London School of Hygiene and Tropical Medicine, Management Sciences
for Health, Partnership for Maternal, Newborn & Child Health, Save the Children, University
College London, United Nations Population Fund, the United Nations Children’s Fund,
and University Research Co.

Assessment and management of conflicts of interest

All external experts involved in the development of this guide, including the lead and
contributing writers, submitted a signed declaration of interest to WHO, disclosing any
potential conflicts of interest that might influence or could reasonably be perceived to
influence their objectivity and independence in relation to the guide’s content. WHO
thoroughly reviewed each declaration and determined that none posed an actual or
reasonably perceived conflict of interest concerning any aspect of the guide.

Abbreviations

DQA data quality assurance
DQC data quality control
DQI data quality improvement
HIS health information system
HMIS health management information system
IMNCI integrated management of newborn and childhood illness
LMICs low- and middle-income countries
M&E monitoring and evaluation
MNCH maternal, newborn and child health
MNCAH maternal, newborn, child and adolescent health
PDSA Plan–Do–Study–Act
PPH postpartum haemorrhage
PSBI possible serious bacterial infection
QA quality assurance
QC quality control
QI quality improvement
QoC quality of care
QP quality planning
RHIS routine health information system
SAM severe acute malnutrition
SSNB small and/or sick newborn
TB tuberculosis
USAID United States Agency for International Development
WHO World Health Organization

1 About the guide
1.1 Purpose and focus of the guide
Measurement is a core principle of improving health care. Regular measurement of
selected quality of care (QoC) indicators during a time-limited quality improvement (QI)
initiative helps managers and health worker teams track progress and guide iterative
changes as they work together to improve care. This guide focuses specifically on the
measurement of QoC for the purpose of improving care. The various purposes for
measuring QoC, and implications for measurement methods, are reviewed in Chapter 2.

The guide has been designed to complement World Health Organization (WHO) implementation guidance for improving quality of facility-based maternal, newborn and child health (MNCH) services (1), emphasizing the subnational health system (e.g. region, district) as a primary unit for implementing and monitoring large-scale initiatives to improve QoC.

NOTE! The terms 'health information system' (HIS), 'routine health information system' (RHIS) and 'health management information system' (HMIS) are explained in more detail in Chapter 4.

The guide offers practical guidance and references to key resources that can be used across countries regardless of the maturity of their health information systems (HIS), highlighting measurement methods and data sources that are likely to be feasible and sustainable on a routine basis and with a modest investment of resources. Despite
challenges related to the low availability of QoC data in routine health information
systems (RHIS) in many low-resource settings, there are often opportunities to optimize
use of existing QoC data in RHIS for the purpose of improvement while simultaneously
working to strengthen inclusion of priority QoC data elements in RHIS.

The guide focuses on methods for the regular measurement of QoC indicators for the
purpose of improving care. The guide does not address periodic resource-intensive
methods for assessing QoC for other purposes, such as the use of health facility
assessments for planning or quality assurance (e.g. accreditation). However, it does
reference tools such as harmonized health facility assessments and service provision
assessments that can be used to periodically assess QoC to guide planning and track
progress against global and country targets.

While the focus of the guide is on measurement for improving quality of maternal,
newborn, child and adolescent health (MNCAH) care, inclusive of nutrition interventions,
the guide can also be applied to other technical areas such as sexual and reproductive
health and healthy ageing.

Fig. 1. The role of measurement in applying quality improvement methods to improve quality of care

Step 1. Select an area of health care to improve based on important outcomes, local priorities, local burden of disease, and quality problems based on analysis of existing data (e.g. routine health information system, health facility assessment).

Step 2. Set a bold, ambitious and measurable improvement aim.

Step 3. Use quality tools (e.g. fishbone, process maps, 5 whys, etc.) to understand the root causes of quality of care gaps in prioritized areas and to develop change ideas to test.

Step 4. Select a set of quality of care indicators that will be measured to track progress and guide improvement (including input, process, outcome and balancing indicators) (Chapter 3).

Step 5. When possible, collect at least six points of baseline data based on retrospective analysis of routine data (if available). If not possible, rapidly collect weekly or daily data to ascertain the current performance of the health care process(es) to be improved (Chapter 5).

Step 6. Develop and iteratively test and implement cycles of change for identified change ideas (e.g. Plan–Do–Study–Act).

Step 7. Assess the effect of tested changes by monitoring and analysing patterns in quality of care indicator results over time to identify signals, trends and shifts (Chapter 5).

Step 8. Use qualitative data to assess the feasibility, acceptability and likely sustainability of the changes in the local context.

Step 9. If specific changes show improvement in quality of care indicators and the changes are feasible and sustainable in the local context, expand the scale of testing and adopt the changes.

Step 10. Continue tracking the quality of care indicators for at least six months to be sure the gains are being held. Thereafter, periodically measure one or two selected quality of care indicators to ensure that gains are maintained over time and adjust if needed (quality control).
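The short Python sketch below is a minimal illustration of steps 5 and 7, assuming monthly values of a single QoC indicator: it computes the baseline median from six baseline points and applies a widely used run chart rule (six or more consecutive points on one side of the median suggest a shift). All indicator values are invented for illustration.

import statistics

# Step 5: at least six baseline points for the indicator being improved
# (invented proportions, e.g. newborns breastfed within one hour of birth)
baseline = [0.42, 0.38, 0.45, 0.40, 0.44, 0.41]

# Step 7: monthly values collected while changes are being tested (invented)
follow_up = [0.47, 0.52, 0.55, 0.58, 0.61, 0.63, 0.65]

centre = statistics.median(baseline)

def detect_shift(values, centre, run_length=6):
    """True if run_length or more consecutive points fall on the same side
    of the centre line; points exactly on the line are skipped."""
    run, side = 0, 0
    for v in values:
        s = (v > centre) - (v < centre)  # +1 above, -1 below, 0 on the line
        if s == 0:
            continue
        run = run + 1 if s == side else 1
        side = s
        if run >= run_length:
            return True
    return False

print(f"Baseline median: {centre:.3f}")
print("Shift detected:", detect_shift(follow_up, centre))

In this invented series, all seven follow-up points lie above the baseline median, so the rule flags a shift; a QI team would interpret such a signal alongside qualitative findings (step 8).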

For the purpose of this guide, QI is defined as the process by which health workers – as
part of multi-cadre QI teams – analyse the root causes for poor QoC, develop changes to
address those causes, and iteratively test and adapt changes while regularly monitoring
selected QoC indicators.

An outline of steps that are commonly involved in the application of QI methods to improve care is shown in Fig. 1. Although the sequence of steps may differ across various
settings, most QI initiatives will include many of these steps.

This guide focuses on the measurement steps in Fig. 1: selection of QoC indicators
(step 4) and the use of results to track improvement and guide changes (step 7), which
are essential in a broader QI process to improve care.

In addition to QI processes, many health system interventions contribute to improving and sustaining quality care (e.g. interventions to strengthen health governance, policy,
financing, HIS, health infrastructure, commodities and health workforce). Similarly,
the measurement and use of QoC data to improve care requires enabling system
interventions, including interventions to build QI capability of health workers (inclusive
of QoC measurement) and interventions to strengthen the HIS to measure priority QoC
indicators and ensure the quality of data.

As highlighted in Fig. 2, this guide focuses on the selection and use of QoC indicators to
improve care (inner areas) and on the necessary enabling system interventions that make
this measurement possible (outer areas).

Fig. 2. Strengthening quality of care measurement to support care improvements for women, newborns and children

[Figure: two inner areas (selecting quality of care indicators to monitor and guide improvement; tracking and analysing quality of care indicators to guide improvement) surrounded by three enabling areas: assessing and strengthening health information systems to measure and monitor prioritized quality of care indicators; assessing and improving data quality to strengthen quality improvement results and stakeholder trust; and strengthening quality improvement measurement capability of key actors.]



This guide consists of seven linked chapters. Chapter 1 describes the purpose and
structure of the guide, how it was developed, its primary audiences, and how it can be
used as part of micro- (health facility), meso- (subnational), and macro-level (national)
QI initiatives. Chapter 2 introduces the concept of QoC measurement and the different
purposes of measuring QoC at the country level. Chapters 3 to 7 constitute the main
body of the guide and address the selection and use of QoC indicators to improve care as
part of a broader QI process, and the enabling system interventions to support effective
measurement. Chapter 3 provides guidance on how to select QoC indicators linked to
measurable improvement aims. Chapter 4 describes key steps involved in assessing
the readiness of the local HIS to monitor and guide selection of indicators, and steps
that can be taken to introduce MNCAH QoC indicators that are important for sustained
monitoring in the HIS. Chapter 5 reviews key steps and guidance for regularly calculating,
plotting and interpreting QoC indicators over time to guide the iterative changes required
to improve care – emphasizing that regular measurement is essential but insufficient to
improve care if not embedded in a broader QI process. Chapter 6 outlines a systematic
approach to assessing and improving the quality of data as part of QI interventions.
Chapter 7 provides guidance on how to define and build essential health worker QI
capabilities, including the QoC measurement knowledge and skills needed by specific
actors in a given health system.

Chapters 3–7 share a standard format comprising six components: key messages; a brief chapter overview; definitions of key terms and concepts used; practical guidance, including a table of key actions at national, subnational and health facility levels; and a country example to illustrate the application of the guidance in a real-life context.

1.2 How the guide was developed


1.2.1 The main content
The main content of the guide was developed through a systematic process, incorporating
inputs from global and country-level stakeholders, as well as current scientific evidence.
The process included five phases to ensure responsiveness to users’ needs.

Phase 1. Global survey


An initial survey was conducted with key stakeholders to identify needs, challenges and
expectations regarding QoC measurement for the purpose of improving care for women,
newborns, children and adolescents. WHO designed the survey with input from a small
group of experts who supported the conceptualization of the guide. The survey assessed
respondents’ perspectives on important QoC measurement themes and evidence to
include in the guide, as well as their recommendations for the target audiences, structure
and length of the guide.

Examples of survey questions included:

• For whom should the guide be developed, in your view (target audience)?
• What are the top three to five QoC measurement topics that should be included in
the guide?
• What is your recommendation for how best to structure the guide?
• How long should the guide be?

The survey was administered digitally (using Survey Monkey) and shared with WHO
regional offices for distribution to their constituent countries to ensure broad reach across
WHO Member States. Responses were analysed to identify key themes, and the results
were used to inform the initial conceptualization and development of the guide.

Phase 2. Global workshop


In March 2023, WHO organized a three-day global stakeholder meeting of the Network for
Improving Quality of Care for Maternal, Newborn and Child Health – hereafter referred to as
“the network” – to engage QoC champions from government, implementing partners and
other stakeholders to reflect on the previous five years’ efforts to integrate and systematize
QoC in health systems and MNCAH programmes in ten participating countries (2). The
meeting also explored challenges and lessons learned during implementation of the
network in participating countries, examined the findings of two independent evaluations
of the network, and recommended actions on how the network should evolve to
respond to the unfinished and emerging QoC agenda. The meeting included a 90-minute
QoC measurement deep-dive session, during which stakeholders identified gaps and
opportunities around QoC measurement in the network and beyond and made specific
recommendations to address them. One of the recommendations was to develop a practical
guide on how to measure and monitor QoC for the purpose of improving MNCAH services.
Participants recommended the inclusion of the following topics: capacity building for QoC
measurement; HIS strengthening to measure QoC; linking measurement to improvement;
and selecting indicators as part of the design of QI initiatives. Insights from the session
were used to shape the scope and structure of the guide and to ensure its responsiveness
to the priority needs and operational challenges faced by country programme managers.

Phase 3. Literature review


To ensure the guide is grounded in current and relevant evidence, the technical writers reviewed existing technical resources, global guidance and peer-reviewed literature in the topic area of each chapter. This review, along with a phased peer review, helped to identify
important resources and literature, many of which are cited in the guide. The aim was
to identify best practices, methodologies and practical resources in QoC measurement
for the purpose of improving MNCAH services. The insights from the literature review
were used to develop practical content for the guide and to identify useful supporting
resources, as well as ensuring alignment with WHO MNCAH strategic priorities and existing
technical frameworks.

Phase 4. Ongoing peer review


To ensure that the guide is technically robust, practical and harmonized with the broader
WHO QoC measurement efforts, the lead writers engaged with selected members of the
WHO Life Stages QoC Metrics Technical Working Group (TWG) and other experts to provide
periodic peer review at key stages of the guide’s development. This strengthened overall
clarity and coherence of the guide, ensured practical and technically sound guidance
for country-level implementation, and helped ensure that the guide reflects the latest
evidence and best practices in QoC measurement.

Phase 5. Final review


To ensure that the guide meets the highest technical standards and is aligned with the
WHO global MNCAH measurement agenda, the final draft of the guide underwent a review



by the leadership of two additional technical advisory groups: the Mother and Newborn
Information for Tracking Outcomes and Results (MoNITOR) (3) and the Child Health
Accountability Tracking (CHAT) (4).

1.2.2 Health information system landscape assessment tool


The Health information system landscape assessment (HISLA): a tool for assessing the
feasibility of collecting, reporting, and using quality of care indicators (5) – hereafter referred
to as the “HISLA tool” – was designed to assess the feasibility of collecting, reporting and
using QoC indicators to measure the quality of selected services across life stages and
across various contexts, and has been shaped through rigorous testing and adaptation
in multiple settings as described below. The draft HISLA tool was first pilot tested in 2023
in Sierra Leone, which WHO identified as one of the front-runner QoC Network countries
on the basis of their progressive interest in QoC measurement. The first version of the
piloted HISLA tool assessed the readiness of the country’s HIS to collect, report and use
25 paediatric and young adolescent QoC indicators developed by WHO (see Chapter 2 and
Chapter 4). The pilot testing of the HISLA tool also embedded implementation research
(WHO Research ethics clearance ref: 0003963) to evaluate the feasibility and acceptability
of the tool in real-world settings, and to identify context-specific adaptations needed for
broader application in other settings.

Learning from the pilot test in Sierra Leone informed a revision of the HISLA tool and an
approach for its use. The refined tool and process of its application was subsequently
applied in Malawi and further iteratively refined and validated in Kenya and Uganda, with
a final validation phase in the Lao People’s Democratic Republic in the WHO Western
Pacific Region. During the final validation phase in Lao People’s Democratic Republic, the
tool was used to assess local HIS readiness to measure, report and use an expanded set
of MNCAH QoC indicators. The insights gained from testing and validating the HISLA tool
in Sierra Leone, Malawi, Kenya, Uganda and Lao People’s Democratic Republic informed
the final version of the HISLA tool. Additional findings from the research also informed the
development of an implementation blueprint for other countries, which is detailed in
Chapter 4.

1.3 How to use the guide


The chapters in this guide are designed to be read in any order. However, some sections
expand on concepts introduced in earlier chapters. Occasional cross-references are
provided to help users identify connected sections and gain a clearer understanding of
concepts. A key tenet of this guide is to: “Start where you are, using the resources you
have”. Since managers and stakeholders work in a wide range of contexts and will be
starting from different points along a QoC measurement journey, this guide is not intended
as a step-by-step manual. Rather, it offers a practical overview of considerations, along
with references to additional resources in key areas that are foundational to achieving
effective QoC measurement and monitoring for the purpose of improving MNCAH care.

1.4 Audiences for the guide


This guide was informed by, and developed for MNCAH, QI and HIS managers and
stakeholders at national, subnational and service delivery levels in low- and middle-
income countries (LMICs). Examples of key users are provided in Fig. 3.

Fig. 3. Target audiences for the guide

[Figure: three groups of target audiences. National-level actors (e.g. representatives of different national ministry of health directorates or divisions): directors and managers of MNCAH programmes, of the quality management programmes, and of the health information and health informatics entity. Subnational-level actors: ministry of health managers (e.g. regional, provincial, district, county or woreda managers) and partners (e.g. international organizations supporting the ministry of health). Service delivery actors (e.g. facility managers, health care workers, facility-based quality improvement teams): tertiary-, secondary- and primary-level facilities, community-based services, public health services, private health services (e.g. those managing programmes for non-profit, faith-based and private sector health organizations), and publicly funded private health services.]

(Note: Health system and stakeholder terminology may vary in different settings).

At the national level, the guide can be used by representatives of the ministry of health
responsible for MNCAH programmes, HIS, quality management, and other relevant
programmes. At subnational level, HIS managers, MNCAH programme managers and
QoC programme managers can use the guide to strengthen measurement in programmes
to improve QoC. Indeed, these managers are encouraged to work together closely due
to the importance of combined measurement, QI and clinical expertise for the robust
design, implementation and monitoring of initiatives to improve QoC. At the facility level,
health facility managers, health care workers, members of facility QI teams including
community members, and other relevant actors can all use the guide to strengthen
measurement as one core foundation of their improvement work. Other users of the
guide may include managers of private health care networks (e.g. faith-based health
care networks), managers of public and private health care facilities, community health
managers, as well as local organizations and implementing partners supporting the
ministry of health with QI work.



2 Understanding quality of care measurement

2.1 What is quality of care?


The quality of health services is one of the most important determinants of health
outcomes amenable to health care for individuals and populations, in addition to
socioeconomic and other determinants of health. In many low-income settings, poor-
quality care leads to a greater number of preventable deaths than non-utilization of health
care (6). Recent estimates indicate that in such settings, about eight million individuals
die annually from conditions amenable to health care and that 60% of these deaths are
due to poor quality care, while the remaining deaths are due to lack of access or under-
utilization of health care services. In other words, poor quality care is a greater contributor
to mortality than inadequate access to care.

WHO defines QoC as: “…the degree to which health services for individuals and populations
increase the likelihood of desired health outcomes and are consistent with current
professional knowledge” (7). Quality health care services should be effective, safe, people-
centred, timely, equitable, integrated and efficient (Box 1).

Box 1. Definitions of the seven dimensions of quality of care


• Effective: Providing services based on scientific knowledge and evidence-based guidelines.
• Safe: Delivering health care that minimizes risks and harm to service users, including avoiding
preventable injuries and reducing medical errors.
• People-centred: Respecting and responding to individuals’ preferences, needs and values within
health services that are organized around the needs of people.
• Timely: Reducing delays in providing and receiving health care.
• Equitable: Providing health care that does not differ in quality according to personal characteristics
such as gender, race, ethnicity, caste, geographical location or socioeconomic status.
• Integrated: Coordinating health care across levels and providers of health services and across
sectors and making available the full range of health services throughout the life course.
• Efficient: Providing health care in a manner that maximizes resource use and avoids waste.
Source: Adapted from (8).

2.2 Measuring quality of care
2.2.1 The Donabedian framework
Measurement of QoC has gained increasing attention in recent decades as recognition of
the importance of QoC for health outcomes and people’s experience of care has increased.
However, there are much earlier examples of measuring quality. For example, Florence
Nightingale was a pioneer for measurement and use of statistics for improving health care
quality during the Crimean War in the 1850s. She hired data collection teams to record
causes of death among soldiers, convinced leading statisticians to help her interpret her
data to demonstrate that poor sanitation and hygiene standards for wounded soldiers
were a greater cause of mortality than the injuries themselves, and designed a new data
visualization method to convince policy-makers of the importance of her findings (9).

In 1966, Avedis Donabedian proposed a framework for measuring QoC that remains
widely used today (10) and is applied in several of the country examples in this guide.

Fig. 4. Donabedian's quality of care framework

[Figure: Inputs/structures (physical, structural and organizational components necessary for provision of services) → Processes (specific health care processes, e.g. triage, clinical assessment, counselling, treatment) → Outcomes (short-, medium- and long-term results of health care, e.g. mortality, morbidity, patient-reported health and experience of care outcomes).]

Source: Adapted from (11).

The Donabedian framework (Fig. 4) proposes three components of QoC assessment: inputs/structures, processes and outcomes (11).

• Inputs/structures are defined as the physical and organizational attributes of care settings such as health facilities, equipment, qualified health care personnel,
technology platforms, clinical guidelines and protocols, essential commodities, etc.
Measurement of inputs assesses the readiness of a health system or service to offer
quality health care.
• Processes focus on what is actually done during the provision of health care (e.g. flow
of patients in a specific service, triage of clients in an emergency department, clinical
assessment and diagnosis of individual patients, treatment of specific conditions
in individuals, counselling, integration, and coordination of health care processes
etc.). Measurement of care processes may assess the effectiveness, safety, people-
centredness, efficiency, timeliness and equity of such processes.
• Outcomes are the status of the health, well-being or experience of care of an individual
or population, which are influenced by inputs and care processes in the health care
setting as well as by other determinants of health. In the Donabedian framework, it is the
combination of inputs/structures and care processes that influences the most important
measure of quality of health care: health outcomes that are amenable to health care.

Although the relationship between inputs/structures, processes and outcomes is not
always straightforward, essential inputs or structures are pre-conditions for delivering
quality services that, combined with health care processes, influence the likelihood of
better health outcomes. Inputs must be used correctly and consistently as part of health
care processes to influence the health outcomes for which they are required. An important
proposition in the 2018 Lancet Quality Commission is that availability of inputs alone has
little to no effect on outcomes (6). Similarly, processes of care that utilize inappropriate
inputs (e.g. inappropriate medication, non-sterile surgical instruments, unskilled health
personnel, etc.) will be less likely to improve health outcomes or in some cases may
cause harm.

2.2.2 Different purposes of measuring quality of care


Measurement of QoC may serve different purposes in a health system. Two complementary
frameworks are presented below. One framework considers QoC management functions
within a health system and the measurement needs to support each function (the Juran
Trilogy). The other framework considers the QoC measurement needs of key actors at
distinct levels of a health system based on the roles they play.

a. The Juran trilogy framework


One useful framework for conceptualizing different purposes for measuring QoC is the
‘Juran trilogy’ (12), which proposes three important QoC management functions in any
organization or health system: quality planning (QP), quality improvement (QI) and quality control (QC) (Fig. 5).

Fig. 5. Three components of the Juran trilogy

[Figure: the Juran trilogy, linking quality planning, quality control and quality improvement.]
Source: Adapted from (12).



Working definitions of QP, QI and QC vary according to different organizations, specific
documents, and even within individual organizations, including United Nations agencies.
This guide proposes the following definitions, which are adapted from a variety of sources.

• Quality planning (QP): Process of defining goals for quality health care and defining
structures, strategies and activities required to achieve those goals (e.g. governance
structures, national strategies, subnational costed operational plans, etc.)
• Quality improvement (QI): Process by which local multi-cadre teams analyse the root
causes for poor QoC, develop proposed changes to address those causes, and iteratively
test and adapt changes while regularly monitoring selected QoC indicators. It should
be noted that multiple interventions, in addition to QI, contribute simultaneously to
improving QoC (e.g. policy development, clinical capacity strengthening, distribution
of skilled providers.)
• Quality control (QC): Internal process of assessing quality of care within a local system
(e.g. district, facility) to verify that performance meets standards and remains stable
(e.g. accuracy of laboratory testing, quality of care provided in a specific service, etc.).

Another term commonly used alongside QP, QI and QC is quality assurance (QA).

• Quality assurance (QA): Process of assessing whether specified health system/service standards have been met (i.e. including those related to QP, QI and QC), usually for the
purpose of regulation by an external authority. Examples of QA include certification
and accreditation.

The common purposes of measuring QoC can be mapped to the Juran trilogy components
and QA, with implications for different measurement approaches (Table 1).

Table 1. Common purposes of measuring QoC mapped to the three components of the Juran trilogy plus quality assurance and implications for measurement methods

• Purpose: Periodically measure QoC to guide planning (e.g. QoC policy, strategy, operational plans, improvement aims) and track progress against country and global targets.
  Component: QP.
  Measurement approach: Typically, measurement is infrequent and may use resource-intensive methods (e.g. health facility assessment, household survey).

• Purpose: Regularly monitor QoC for the purpose of improving care during a specific improvement time period, using selected QoC indicators.
  Component: QI.
  Measurement approach: Typically, measurement is very regular (at least monthly) and is led by on-the-ground improvement actors (e.g. district managers, facility QI teams) using routine health information sources, as feasible.

• Purpose: Periodically measure QoC to assess adherence with standards and to verify that quality is being maintained once improved.
  Component: QC.
  Measurement approach: Typically, measurement is periodic (e.g. 2–4 times per year) and is led internally (e.g. by district or facility managers) using routine health information sources, as feasible.

• Purpose: Periodically measure QoC for the purpose of regulation or accreditation.
  Component: QA.
  Measurement approach: Typically, measurement is infrequent and is led externally (e.g. by an accreditation body) using more resource-intensive methods.

b. QoC measurement from the perspective of different actors


Different actors have varying needs for QoC measurement based on the roles they
play in supporting QoC in a health system. Most large-scale quality programmes target
interventions and actors at multiple system levels. WHO implementation guidance for
improving quality of facility-based maternal, newborn and child health services outlines
key actions and actors at national, subnational and facility levels (2). As part of a multi-
country network for improving QoC for MNCH, a QoC monitoring framework consisting
of four measurement components (13) was developed to support the monitoring needs
of actors operating at specific system levels in participating countries (14). The four
measurement components and their purposes and key users are described in Fig. 6.

Fig. 6. The four components of the quality of care network monitoring framework

1. Core indicators: a prioritized small set of input, process, outcome and impact indicators for use by all stakeholders at every level of the health system to track and compare progress across sites and levels.
2. Quality improvement indicators: a flexible menu of prioritized indicators to support rapid improvement in quality of care, led by facility-based quality improvement teams and supported by subnational managers.
3. Subnational performance indicators: a flexible menu of indicators to support subnational managerial and leadership functions in improving and sustaining quality of care in facilities.
4. Implementation milestones: a prioritized set of milestones to track whether quality of care programme activities are being implemented as intended, for use by quality of care programme managers at national and subnational levels.

Source: Adapted from (14).

For additional discussion of the QoC measurement needs of different actors, see section
3.4.7, which reviews the needs of different data users in a subnational QI initiative.



Table 2 presents a core set of MNCH indicators (i.e. the first component in Fig. 6), which
were developed based on the WHO standards for improving quality of maternal and
newborn care in health facilities (15) and the WHO standards for improving QoC for children
and young adolescents in health facilities (16). The Network monitoring framework also
includes an expanded catalogue of QoC indicators to support measurement components
2 and 3 in Fig. 6. The core indicators were recommended for adoption as part of a national
HIS, to track priority MNCH inputs, processes and outcomes and to facilitate learning
within and across countries in the network. The availability of data to support calculation
of the core indicators varied across countries in the Network based on the maturity of
the local HIS. Some indicators were designed to measure domains of QoC that were
relatively new to mainstream QoC measurement in LMICs at the time, such as client-
reported experience of care indicators. The detailed metadata for these indicators are
presented in the Annex.

Table 2. WHO recommended core indicators for monitoring QoC for maternal, newborn and child health in health facilities based on the WHO quality of care standards

Core maternal and newborn QoC indicators mapped to the QoC standards (15)
1. Institutional maternal mortality ratio (disaggregated by cause)
2. Institutional obstetric case fatality rate
3. Pre-discharge neonatal mortality rate (disaggregated by cause)
4. Institutional stillbirth rate
5. Immediate administration of a uterotonic after birth for postpartum haemorrhage
prevention
6. Breastfeeding initiation within one hour of birth
7. Newborns with birthweight documented
8. Kangaroo mother care for newborns weighing 2000 g or less
9. Postpartum counselling for mother and baby
10. Companion of choice during labour and childbirth
11. Physical abuse during labour, childbirth or the postpartum period
12. Verbal abuse during labour, childbirth or the postpartum period
13. Basic hygiene provision
14. Basic sanitation for women and their family
Core paediatric and young adolescent QoC indicators mapped to the QoC standards (16)
1. Institutional child mortality rate (disaggregated by cause)
2. In-hospital paediatric case fatality rate by common paediatric conditions
3. Assessment of sick children < 5 years old based on the integrated management of newborn and childhood illness criteria
4. Treatment of possible severe bacterial infection at outpatient level
5. Kangaroo mother care for newborns weighing 2000 g or less
6. Pneumonia treatment with 1st choice antibiotic for children aged between 7 days and
5 years
7. Management of acute watery diarrhoea among children <5 years old
8. Children and young adolescents with documented malaria test results
9. Treatment of uncomplicated severe acute malnutrition

Core paediatric and young adolescent QoC indicators mapped to the QoC standards (16)
10. Management of anaemia in children and young adolescents
11. Children < 2 years of age with known HIV status for either the mother and/or the child
12. Tuberculosis (TB) evaluation for children and young adolescents with presumptive TB
13. Catch up immunization for children < 2 years old
14. Inappropriate use of antibiotics for cough or cold in children and young adolescents
15. Completion of medical documentation for children and young adolescents
16. Quality of care data reviews for children and young adolescents
17. Knowledge and understanding of the condition and treatment plan among children and
young adolescents or their caregivers
18. Satisfaction with decision-making process for care
19. Pre-discharge counselling on danger signs and feeding for children < 5 years old
20. Awareness of child rights during health care
21. Disrespectful care for the child or caregiver
22. Accompaniment during care
23. Access to play and educational material during hospitalization
24. Clinical mentorship or training for childcare providers
25. Stock out of essential child health medicines

Development of WHO recommended core QoC indicators for antenatal care, postnatal
care, care of small and sick newborns, care of women with obstetric complications, care
of adolescents, and care of ageing adults, is still ongoing; thus, these indicators are not
included in this guide.
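To illustrate the arithmetic behind indicators such as those in Table 2, the sketch below computes two facility-level results from routine aggregate counts. The conventions shown (maternal deaths per 100 000 facility live births; stillbirths per 1000 total births) are common but assumed here: the authoritative numerators, denominators and disaggregations are those specified in the Annex, and the counts are invented.

monthly_counts = {
    "live_births": 412,       # live births in the facility this month
    "stillbirths": 7,         # stillbirths in the facility this month
    "maternal_deaths": 1,     # maternal deaths in the facility this month
}

def institutional_mmr(c):
    """Maternal deaths per 100 000 facility live births (assumed convention)."""
    return c["maternal_deaths"] / c["live_births"] * 100_000

def institutional_stillbirth_rate(c):
    """Stillbirths per 1000 total births, i.e. live births plus stillbirths
    (assumed convention)."""
    return c["stillbirths"] / (c["live_births"] + c["stillbirths"]) * 1000

print(f"Institutional MMR: {institutional_mmr(monthly_counts):.0f} per 100 000 live births")
print(f"Institutional stillbirth rate: "
      f"{institutional_stillbirth_rate(monthly_counts):.1f} per 1000 total births")

Note that small monthly denominators make single-month mortality ratios unstable, which is one reason outcome indicators are usually tracked over longer periods or aggregated across facilities.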



3 Selecting quality of care indicators to monitor and guide improvement

3.1 Key messages


■ Within the context of a QI initiative, QoC indicators should be
selected based on the improvement aims that are defined for the
initiative.
■ QoC indicators should include the targeted care outcome(s), care
processes and essential inputs to be improved as well as possible
unintended consequences.
■ There is no one-size-fits-all approach to selecting MNCAH QoC
indicators, but there are many useful resources to help guide
selection of QoC indicators.
■ Collection and monitoring of QoC indicators should be feasible in
a given programme setting, using existing data sources whenever
possible.

3.2 Chapter overview


Fig. 1 outlines the common steps for selecting and monitoring QoC indicators to guide
iterative changes in a time-bound QI initiative. This chapter offers practical guidance on
how to select QoC indicators based on the improvement aim(s) identified in a QI initiative,
providing illustrative examples of MNCAH input, process and outcome QoC indicators, as
related to specific improvement aims. The chapter also examines the information needs and
key users of QoC indicator data in a multi-site QI initiative (e.g. a district-wide QI initiative).

3.3 Key terms and concepts


3.3.1 Types of QoC indicators
The terms ‘measures’, ‘metrics’ and ‘indicators’ are often used interchangeably to describe
parameters that can be used to show changes or progress. Their definitions vary across
different organizations and materials. The term ‘indicators’ is used throughout the current
guide because it is more readily understood by many programme managers supporting
improvement work. However, readers should be aware that some of the referenced
resources use alternative terms.

Many programmes seeking to improve QoC monitor four types of QoC indicators –
sometimes called a ‘family of indicators’ – drawing on the well-established Donabedian
framework (10,11). See Chapter 2 for a definition of the first three types of indicators (i.e.
input, process and outcome indicators) based on the Donabedian framework. A fourth

type, ‘balancing indicators’, measures potential unintended consequences in one part
of a system associated with improving care in another part of the system. Balancing
indicators can be input, process or outcome indicators. Table 3 provides an example of
QoC indicators, categorized by indicator type, for an illustrative paediatric health care improvement initiative.

Table 3. Examples of QoC indicators in a paediatric health care improvement initiative to improve quality of care for children with pneumonia

Improvement aim: Reduce paediatric pneumonia case fatality rate from 7% to 1% within 6 months in X health care facilities by improving quality of assessment, diagnosis and treatment of children with pneumonia.

QoC indicators (indicator type):
• Proportion of children <5 years presenting with cough/difficulty breathing that are assessed for fast breathing and chest in-drawing, per the integrated management of childhood illness protocol (process)
• Proportion of children <5 years with pneumonia (fast breathing or chest in-drawing) treated with amoxicillin (process)
• Proportion of days with stock-outs of amoxicillin (input)
• Paediatric pneumonia case fatality rate (outcome)
• Paediatric malaria case fatality rate (balancing, outcome)
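As a complement to Table 3, the sketch below shows one possible way a QI team could record such a family of indicators and compute monthly results from count data; the data structure, field names and counts are illustrative assumptions, not a prescribed format.

from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    kind: str         # "input", "process", "outcome" or "balancing"
    numerator: int    # e.g. children treated with amoxicillin this month
    denominator: int  # e.g. children diagnosed with pneumonia this month

    def value(self) -> float:
        """Monthly result as a percentage of the denominator."""
        return 100 * self.numerator / self.denominator

family = [
    Indicator("Assessed for fast breathing/chest in-drawing", "process", 164, 201),
    Indicator("Pneumonia cases treated with amoxicillin", "process", 87, 95),
    Indicator("Days with amoxicillin stock-out", "input", 4, 30),
    Indicator("Pneumonia case fatality", "outcome", 3, 95),
    Indicator("Malaria case fatality", "balancing", 2, 60),
]

for ind in family:
    print(f"{ind.name} ({ind.kind}): {ind.value():.1f}%")

Keeping the indicator type on each record makes it easy to verify that the family covers inputs, processes, outcomes and at least one balancing indicator.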

3.3.2 Relationship between improvement aims and quality of care indicators in a quality improvement initiative
Within the context of a QI initiative, QoC indicators are developed based on the
improvement aims that are selected during the design of the initiative. One framework,
among several, to guide selection of aims and quality of care indicators is the model for
improvement (18) (Fig. 7). The model for improvement poses three questions: What are
we trying to accomplish (the aim)? How will we know that a change is an improvement
(QoC indicators)? And what change can we make that will result in improvement?

Fig. 7. Model for improvement

Aim: What are we trying to accomplish?
Measures: How will we know that a change is an improvement?
Changes: What changes can we make that will result in improvement?

[Figure: the three questions lead into an iterative Plan–Do–Study–Act cycle.]

Source: (18).



The first component of the model, the three questions, guides planning for improvement. The answer to the first question – What are we trying to accomplish? – helps set an improvement aim. The second question – How will we know that a change is an improvement? – guides selection of QoC indicators. Changes identified through the third question – What changes can we make that will result in improvement? – are tested via
iterative cycles of the second ‘Plan–Do–Study–Act’ (PDSA) component of the model. The
‘study’ element of the PDSA cycle may use qualitative and quantitative data to assess
the effect of a specific change cycle. The model for improvement is one among several
frameworks used for planning, implementing and monitoring interventions to improve
care by applying a QI process.
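To show how the three questions and the PDSA cycle fit together, the runnable sketch below pairs an invented aim with a stand-in measure and iterates over change ideas. Every name, value and the random measurement are hypothetical illustrations; the model for improvement does not prescribe any particular implementation.

import random

# Question 1 (the aim) and question 3 (change ideas): invented examples
aim = "Increase breastfeeding initiation within 1 hour from 40% to 80% in 6 months"
change_ideas = [
    "reminder poster in the delivery room",
    "assign a nurse to support initiation after every birth",
    "add initiation to the delivery checklist",
]

def measure_indicator() -> float:
    """Question 2: in practice, calculate the selected QoC indicator from
    routine data; here a random stand-in so the sketch runs end to end."""
    return random.uniform(0.40, 0.80)

# Iterative PDSA cycles (the second component of the model)
print(f"Aim: {aim}")
for change in change_ideas:
    # Plan: predict the effect of the change and decide how to test it
    # Do: test the change on a small scale (e.g. one ward for one week)
    result = measure_indicator()
    # Study: compare the measured indicator against the prediction
    # Act: adopt, adapt or abandon the change before the next cycle
    print(f"After testing '{change}': indicator = {result:.0%}")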

Box 2 provides examples of strong and weak improvement aim statements for selected
quality domains, based on the first question of the model (i.e. What are we trying to
accomplish?). Improvement aim statements should be specific, measurable, attainable,
relevant and time-bound. A strong improvement aim answers the questions: a) What?
The process or outcome to improve; b) Who? The patients affected and/or involved health
workers; c) How much? The magnitude/size of the change you hope to achieve; and d)
By when? The timeframe for improvement.

Box 2. Illustrative strong and weak improvement aim statements for various
quality domains
Safety of care (cross-cutting)
• Weak aim statement: In Zululand District Municipality, we aim to improve
handwashing practices in all our facilities.
• Strong aim statement: Between 1 and 31 December 2022, we aim to improve
staff handwashing practices before and after patient interactions from a baseline
of 27% to 75% in four health facilities in Zululand District Municipality.
Effectiveness of care (child health)
• Weak aim statement: In Zululand District Municipality, we aim to improve
diagnosis and treatment of pneumonia in children by improving antibiotic use.
• Strong aim statement: Between 15 January and 15 July 2024, we aim to reduce
pneumonia deaths by 30% in children hospitalized for pneumonia in four
hospitals in Zululand District Municipality.
Person-centred care (maternal health)
• Weak aim statement: We aim to improve women’s experience of childbirth services.
• Strong aim statement: Between 1 January and 30 June 2024, we aim to increase
the proportion of interviewed women reporting that they were treated with
respect during childbirth in Bamba Maternity Home from 62% to 86%.
Person-centredness and effectiveness of care (adolescent health)
• Weak aim statement: We aim to improve the quality of adolescent health
services in our facilities.
• Strong aim statement: Between 1 January and 31 December 2024, we aim to
increase the proportion of adolescents receiving comprehensive sexual and
reproductive health counselling during facility visits, from a baseline of 40% to
85% in five primary health facilities in the Western Region.

3.4 Practical guidance
Although there is no one-size-fits-all approach to select QoC indicators for use in a QI
initiative to improve MNCAH care, the considerations below are an important starting
point. The development of well-defined QoC indicators and associated measurement
methods and data sources should be supported by managers and stakeholders with
a combination of subject matter (e.g. clinical), QI and measurement expertise.

Illustrative actors: A blended team of programme managers, QI stakeholders and
health workers representing a range of technical expertise (e.g. QI focal point,
HIS officers, MNCAH programme managers, clinical providers etc.).

3.4.1 Identify potential (candidate) QoC indicators

Once the area of health care to improve has been identified and a measurable
improvement aim(s) has been agreed by QI stakeholders, the next step is to identify a
list of potential QoC indicators that will be monitored during the QI initiative to assess
progress and guide change cycles.

• First and foremost, it is important to reflect on what is most important to measure to
track progress and guide iterative changes to achieve the improvement aim. Typically,
the most important indicators are those that monitor the processes and outcomes that
have been targeted for improvement in the improvement aim (see Table 3).
• If stakeholders can identify meaningful indicators based on the improvement aim, a
next step is to check whether these or similar indicators are currently available in the
health information system (e.g. in facility registers, in reporting forms that feed into the
national HIS). Sometimes, an indicator of interest may be available in a facility register
but is not reported up into the national HIS, in which case stakeholders will need to
decide whether it is worth the effort to manually review a sample of registers each week
or month to extract data to calculate the indicator. For QI initiatives operating at large
scale it is generally preferable to select indicators that are available in the national HIS
and can be quickly calculated from facility reporting forms and across multiple sites
without the need for manual review of primary data sources.
• If stakeholders are not able to identify meaningful indicators based on the improvement
aim, then it is recommended that stakeholders check if there are appropriate MNCAH
QoC indicators in existing national or subnational monitoring and evaluation
(M&E) frameworks (e.g. as part of regional QoC operational plans). Most national or
subnational M&E frameworks have associated indicator reference documents/sheets
with details on indicator metadata, including indicator definitions and data sources.
• If the in-country M&E framework(s) or recommendations do not include relevant QoC
indicators, stakeholders are encouraged to review other resources such as global
MNCAH monitoring frameworks, WHO-recommended QoC indicators (see Table 2
and the Annex), peer-reviewed publications, or other technical reports to identify
appropriate indicators. WHO has, for example, developed a Maternal, newborn,
child, adolescent health and ageing (MNCAHA) and QoC indicator metadata toolkit
that provides a searchable database of indicators across life stages, including their
metadata (19).



3.4.2 Develop specific QoC indicators, including new indicators if necessary
It is not unusual for stakeholders to define an improvement aim for which no established
indicators already exist. In this case, new indicators should be developed by stakeholders
with a combination of subject matter (e.g. clinical), QI and measurement expertise
(sometimes one person may possess expertise in all three areas). In general, QoC
indicators should be as simple as possible, should be feasible to collect without significant
added effort, and should be closely aligned with the improvement aim (i.e. measure the
most important care processes and outcomes that are targeted in the improvement aim).

Important considerations and criteria for selection of strong QoC indicators are listed
below. Sometimes it may not be possible to satisfy all such criteria. However, over time,
as managers and stakeholders acquire increasing expertise and experience, they will
usually find it easier to meet these indicator selection criteria.

• Relevance to the QI aim: The indicator must be closely aligned with the specific QI aim,
ensuring it measures what is intended to be improved.
• Measurability: The indicator must be measurable with clearly defined metrics or
standards, allowing for objective assessment of improvement (or decline). Data
collection for the indicator should be feasible within the health care setting using
available resources and technology.
• Validity: The indicator should validly reflect the QoC construct it is intended to measure.
To the extent possible, it must be based on established evidence or best practices in
health care.
• Reliability: In the context of a multi-site QI initiative the indicator should consistently
produce reliable data across different settings, providers and patient populations. The
results should also be reproducible under similar conditions.
• Sensitivity to change: The indicator should be sensitive to changes in care processes
or outcomes, allowing organizations to detect improvements (or declines) associated
with interventions.
• Actionability: The indicator should provide actionable information that helps health
workers improve care. Data for the indicator should be available promptly enough to
drive real-time improvements.
• Patient-centred: Indicators of patient-centred care should focus on aspects of care that
matter most to patients, aligning with patient values, priorities and needs.
• Comparability: In a multi-site QI initiative, the indicator should use standardized
definitions to facilitate comparison and benchmarking across sites and to motivate
improvement and friendly competition.
• Equity-focused: Where relevant, the indicator should account for variations in care
among different patient groups and highlight disparities in quality of care (including
care processes and outcomes). Often QoC indicators can be disaggregated by equity
‘stratifiers’ to identify populations receiving poorer quality of care, and to inform and
monitor interventions to reduce disparities in quality of care (see the sketch after
this list).
• Feasibility of collection and reporting: The process of collecting, analysing and reporting
on the indicator should not place an undue burden on health care providers. The
indicator should be straightforward to collect without requiring complex or expensive
infrastructure.
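
As an illustration of the equity-focused criterion above, the sketch below disaggregates one
process indicator by a single equity stratifier using pandas. The dataframe and its column
names are hypothetical; in practice the stratifiers would come from the data elements
available in the local HIS.

import pandas as pd

# Minimal sketch: disaggregating a QoC process indicator by an equity
# stratifier (place of residence). Column names are hypothetical.
records = pd.DataFrame({
    "residence": ["rural", "rural", "rural", "urban", "urban"],
    "treated_per_protocol": [True, False, False, True, True],
})

# % of cases treated per protocol, by stratifier
by_stratifier = (
    records.groupby("residence")["treated_per_protocol"]
    .mean()          # share of True values per group
    .mul(100)
    .round(1)
)
print(by_stratifier)  # reveals any rural/urban disparity in quality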

3.4.3 Balance indicator types
Once a list of candidate indicators has been established, determine the appropriate
balance between process indicators (e.g. adherence to clinical protocols) and outcome
indicators (e.g. change in health status) needed to track progress toward and guide
iterative changes to achieve the improvement aim (see Chapter 2 on indicator types). Also
consider the inclusion of input indicators, as appropriate, and the inclusion of balancing
indicator(s) (see Table 3). A balancing QoC indicator typically measures a process or
outcome that is not targeted in a QI initiative, to verify that changes to improve a specific
health care process do not cause an unintended deterioration or worsening of other care
processes and outcomes that are not the focus of the improvement effort. It is important
to ensure that stakeholders provide input on the balance of indicators they believe can
best measure and guide progress toward the QI aim and identify unintended worsening
of quality of care in areas that are not a focus of the QI initiative.

3.4.4 Consider measurement feasibility in the local context


Indicators should be feasible to monitor in the programme setting. In many settings,
health facility registers include a limited number of QoC data elements and individual
standardized patient records may not exist. As part of the selection of QoC indicators,
it is important for stakeholders to consider what QoC indicators currently exist in their
RHIS and will be feasible to monitor on a regular basis (see Chapter 4 on assessing the
readiness of HIS to measure priority indicators).

If a new measurement method or data source is proposed, stakeholders should carefully
consider the extra effort involved in collecting these data to determine whether this effort
is justified. As a general rule, stakeholders should try to use and strengthen existing data
systems rather than introducing parallel data collection processes, and should align the
selection of indicators with national, regional or district-level recommendations.

3.4.5 Define a measurement method for every selected QoC indicator
It is important to clearly define and specify a measurement method for every QoC
indicator selected in a QI initiative. Table 4 outlines key information that should be
defined for every QoC indicator that is selected for use in a QI initiative. It can be
adapted based on the needs in a specific context. It is important that all QI team
members and managers have a solid understanding of the QoC indicators and
associated measurement methods that will be used to monitor progress and guide
iterative changes during a QI initiative.

NOTE! Chapter 4 provides guidance for developing more comprehensive metadata
fields for QoC indicators that are adopted as part of national and subnational HIS.

3.4.6 Defining data sources for QoC indicators that do not exist in
the HMIS and considerations for their incorporation
In some instances, QoC indicators that are important to measure temporarily during a
time-limited QI initiative may not be appropriate to monitor over the longer term in a
national HIS. This is especially relevant for settings in which the HMIS does not include
individual patient notes that document more complex processes of care. In these
instances, it may be appropriate to define a temporary data source (measurement
method) for selected QoC indicator(s) during the QI initiative. For example, stakeholders
may decide to: temporarily add a column to an HMIS register; introduce a temporary
one-page data collection tool to complement existing HMIS data elements; or introduce
a simple standardized patient chart targeting the process of care they seek to improve.
Chart audits may be a temporary method to measure more complex process indicators
(e.g. the proportion of women with severe pre-eclampsia treated appropriately).
However, unless patient charts include essential information in a standardized format,
it can be difficult to extract the necessary data to calculate standardized QoC indicators
for more complex care processes.

NOTE! Chapter 4 includes recommendations for assessing and adapting facility HMIS
data sources to identify relevant QoC indicators that may be available at the facility
level but not aggregated at national and subnational level (e.g. service registers,
patient charts) and to guide temporary local adaptation of existing data sources to
permit calculation and measurement of QoC indicators that are not available in
existing HMIS data sources.

Indicators of patient experience of care are important measures of the person-centredness
of care that are not available in the HMIS in most settings. There is increasing work to define
important domains of patient experience of care in MNCAH services and several patient
questionnaires capturing priority domains have been validated in low-resource settings in
the past decade. These questionnaires may be adapted and used in a QI initiative to regularly
measure patients’ experience of care and guide interventions to improve patients’ experience
of care (see section 5.5). Qualitative methods are also important for understanding patients’
experience, needs and priorities. The use of qualitative methods for assessing and improving
quality of care, including patients’ experience, is discussed in section 5.4.3.

In some instances, QoC indicators selected for monitoring in a time-limited QI initiative will
be highly relevant for incorporation into the HIS for sustained monitoring for the purpose
of QP, QI and QC. For example, in some settings the cause of a maternal, newborn or child
death is not captured as a standardized data element in the national HIS. However, this
is vital information to inform QP, QI and QC processes. Chapter 4 reviews considerations
for incorporating QoC indicators into a HIS.

3.4.7 Additional considerations


a. Include people with different expertise
• When selecting QoC indicators for the purpose of improving MNCAH care, it is important
to include stakeholders with essential areas of expertise and to consider the maturity
of the HIS in the local context.
• The selection of QoC indicators, measurement methods and data sources should
be guided by a team that combines expertise in MNCAH technical subject matter,
measurement and QI. The team should include individuals with deep knowledge of
the HIS, including data elements in the existing HMIS (e.g. health care facility registers,
patient records, facility monthly summary reports, aggregate data from the district
health information system (DHIS2) and other data sources).

Table 4. Example of essential information to be defined for each selected QoC indicator

Indicator: Birth companion of choice
• Indicator definition: % of women who wanted and had a companion of choice
supporting them during childbirth in the health facility
• Numerator: # of women who wanted and had a companion of choice supporting
them during childbirth in the health facility
• Denominator: # of recently delivered women who completed the exit survey
• Measurement method: Exit survey
• Frequency: Monthly
• Responsible person(s): Maternity QI team

Indicator: Immediate administration of a uterotonic after birth
• Indicator definition: % of women who gave birth in a health facility who received a
prophylactic uterotonic immediately after birth for prevention of PPH
• Numerator: # of women who gave birth in a health facility who received a
prophylactic uterotonic immediately after birth for prevention of PPH
• Denominator: # of women who gave birth in the facility
• Measurement method: Facility labour and delivery register
• Frequency: Monthly
• Responsible person(s): Maternity QI team

Indicator: Contraceptive counselling and offer of a modern contraceptive method
to adolescents
• Indicator definition: % of adolescent clients (15–19 years) counselled and offered a
modern contraceptive method during their health facility visit
• Numerator: # of adolescent clients (15–19 years) who were counselled and offered
a modern contraceptive method during their visit
• Denominator: # of adolescent clients (15–19 years) attending the health facility
during the reporting period
• Measurement method: Review of facility registers and client records
• Frequency: Monthly
• Responsible person(s): Family planning nurse or adolescent health QI team

Note: PPH: postpartum haemorrhage.
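
As a worked example of how such a definition becomes a routine computation, the sketch
below calculates the uterotonic indicator from Table 4 on a hypothetical monthly extract of
a facility labour and delivery register; the column names are assumptions for illustration.

import pandas as pd

# Minimal sketch: '% of women who received a prophylactic uterotonic
# immediately after birth' from a hypothetical extract of a facility
# labour and delivery register. Column names are assumed.
register = pd.DataFrame({
    "month": ["2024-01", "2024-01", "2024-01", "2024-02", "2024-02"],
    "uterotonic_given": [True, True, False, True, True],
})

monthly = register.groupby("month")["uterotonic_given"].agg(
    numerator="sum",      # women who received the uterotonic
    denominator="count",  # all women who gave birth in the facility
)
monthly["indicator_pct"] = 100 * monthly["numerator"] / monthly["denominator"]
print(monthly)  # one row per reporting month, ready for a run chart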

b. Consider the needs of specific data users
• Increasingly and in many countries, subnational managers are supporting the design
and management of large-scale QI initiatives to increase the pace and reach of health
care improvement beyond ‘point-of-care’ QI efforts. These efforts are consistent with
global guidance that outlines key actions and actors at different system levels (1).
Fig. 8 shows the system levels at which different actors support QoC activities in most
health systems. The pyramid shape also depicts the decreasing number of QoC data
elements that are monitored at each higher level of the health system: most at the
front-line facility or community health worker level and least at the national policy-
maker or MNCH programme manager level.
• As discussed in sections 2.2.2 and 7.3.1, and shown in Fig. 8, different actors supporting
QI initiatives in a health system may use QoC data for different purposes. It is important
to consider the needs of these actors when selecting QoC indicators for use in a QI
initiative.
– The members of a facility QI team will usually monitor important QoC outcome
indicators and a greater number of process indicators based on the specific
processes they are trying to improve at any point in time.
– A district manager leading a district-wide improvement effort across many
facilities may monitor the same QoC outcome indicators, but monitor a smaller
number of sentinel QoC process indicators across sites as well as district
management functions (e.g. commodities and human resources management)
and QI programme output measures such as the mentoring of facility QI teams
to guide effective management of the district QI programme (21) (referred to as
implementation milestones in the Network monitoring framework; see Fig. 6) (13).
– Policy-makers and managers at the national level will usually monitor an even
smaller number of QoC indicators in any single technical area, to track high-level
progress across multiple technical areas and achievement of broader health
system goals.
• The key users of QoC data in a subnational multi-site QI initiative are the front-line QI
teams and the managers or other expert stakeholders who support these QI teams and
oversee the multi-site QI initiative. For example, a subnational manager will need to
regularly monitor and analyse QoC indicator results across facilities to guide effective
management of a district-wide QI initiative (e.g. provision of tailored support to lower-
performing facilities, spreading learning from high-performing facilities).

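As a simple illustration of such cross-facility review, the sketch below flags facilities falling
below a threshold on one sentinel process indicator; the facility names, values and the 80%
threshold are hypothetical.

# Minimal sketch: a district-level view of one sentinel process indicator
# across facilities, flagging sites that may need tailored support.
# Facility names, values and the threshold are hypothetical.
results = {"Facility A": 92.0, "Facility B": 61.5, "Facility C": 78.0}
THRESHOLD = 80.0  # illustrative performance threshold (%)

for facility, value in sorted(results.items(), key=lambda kv: kv[1]):
    status = "needs support" if value < THRESHOLD else "on track"
    print(f"{facility}: {value:.1f}% ({status})")
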
c. Less is more
The primary objective of measurement in QI efforts is learning and improving care. It
is important to strike a balance between ‘ideal’ and ‘good enough’ QoC indicators.
There is no hard science on the number of indicators needed, but often 4–6 carefully
selected QoC indicators are sufficient for monitoring and informing interventions to
achieve a single improvement aim. This is usually preferable to collecting ‘nice to know’
data that will increase the measurement burden for front-line teams and are not likely
to be used or to add much value for QI teams and managers supporting these teams.

Fig. 8. Pyramid of data collection and use

[Figure: pyramid with four levels, from top to bottom – Global level; National level;
Subnational level (region/province/district, etc.); Health facility level.]

Source: Adapted from (21).

Table 5 outlines key actions and considerations at the national, subnational and facility
levels to support the selection of meaningful indicators for the purpose of improving care.

Table 5. Summary of key actions for national, subnational and facility levels

National
• Select a small set of priority QoC indicators for monitoring by national managers
and stakeholders based on national and global targets and national priorities for
improvement (e.g. as part of national MNCAH strategies and quality planning)
• Develop a national MNCAH QoC costed monitoring plan including roles and
responsibilities at all levels of the health system

Subnational (regional/district)
• Guide selection of regional/district-wide improvement aims and quality of care
indicators based on subnational priorities and information on local quality of care
gaps and health outcomes (e.g. mortality, morbidity)
• Consider all domains of quality (i.e. safe, effective, timely, efficient, equitable,
person-centred) when selecting improvement aims and QoC indicators
• Select QoC indicators based on improvement aims, measurement feasibility and
the needs of managers and QI teams
• Define measurement methods for all selected QoC indicators

Facility
• Analyse local data on QoC and review subnational improvement aims to prioritize
areas for improvement (e.g. retrospective review of QoC data in a child health register)
• Select a small number of QoC indicators drawing on subnational QoC indicators
and facility-specific priorities
• Establish and support QI teams that include health information officers and
regularly monitor and act on QoC indicator results to improve care



3.5 Country example
The country example illustrates the process of selecting QoC indicators for a multi-facility
QI initiative in Kogi and Ebonyi states, Nigeria.

Improving care for women and newborns in Kogi and Ebonyi States, Nigeria
In 2016, the State Ministry of Health in Kogi and Ebonyi States of Nigeria, with support
from partners including the USAID Maternal and Child Survival Program, launched
a QI initiative in selected local government areas to improve care for mothers and
newborns. State-level data demonstrated persistently high rates of institutional
maternal and neonatal mortality and morbidity, with the majority of deaths occurring
on the day of birth. HMIS and other sources of data demonstrated many gaps in
quality of childbirth and postpartum health care services for women and newborns.

In each state, the State Ministry of Health convened a consultation with key
stakeholders, including local government area and facility representatives, to
prioritize areas for improvement (aims) based on the WHO standards for improving
quality of maternal and newborn care published in 2016 (15).

Stakeholders selected two improvement aims focused on improving adherence with
evidence-based high-impact care processes for pregnant women and newborns on
the day of birth, to try to reduce maternal and perinatal mortality and morbidity.
Based on the selected improvement aims, stakeholders identified a small number
of QoC indicators (process and outcome) for monitoring by QI teams and local
government area managers.

To inform selection of QoC indicators that were feasible for monitoring in the
programme setting, health information officers of the State Ministry of Health mapped
existing QoC data elements in the standardized facility labour and delivery and
postnatal care registers, and facility monthly health information reporting forms.
MNH programme managers, health information officers and QI experts worked
together to select a small number of QoC indicators (process and outcome) based
on existing data availability and the processes and outcomes prioritized to achieve
the two programme aims.

Although most selected QoC indicators could be measured via existing data in facility
registers, stakeholders elected to measure a small number of QoC indicators that
they considered vitally important but for which there was no existing data. They
recommended that these indicators be monitored by adding a column to the labour
and delivery and postnatal care register. These indicators are noted (*) below.

Childbirth and early postnatal care for pregnant women

Improvement aim (tailored in each facility to include a target based on baseline
performance): Improve adherence with evidence-based best practices for routine labour
and delivery and early postnatal care from X% (baseline) to Y% within 10 months among
pregnant women giving birth in 91 facilities in Ebonyi and Kogi states (this improvement
aim was selected to contribute to a broader set of interventions to reduce preventable
maternal complications and stillbirths).

Priority clinical processes (labour and delivery care):
• Document fetal heart rate (FHR) on admission to maternity
• Document blood pressure of pregnant woman on admission to maternity (early
detection of pre-eclampsia)
• Monitor progress of labour and maternal/fetal well-being during labour to guide
care and detect early signs of complications
• Facilitate presence of birth companion of choice if desired
• Administer uterotonic immediately after birth to prevent PPH

Quality of care indicators (indicators marked with * required addition of a column
in the facility register):
Process indicators
• % women with fetal heart rate documented on admission*
• % women with blood pressure documented*
• % deliveries monitored with partograph*
• % women delivered with companion of choice*
• % women receiving prophylactic uterotonic in third stage of labour
Health outcome indicator
• Stillbirth rate (intrapartum)

Postnatal newborn care

Improvement aim (tailored by each facility to include a target based on baseline
performance): To improve adherence with evidence-based best practices for early
postnatal care for newborns from X% (baseline) to Y% for every newborn within
9–12 months (this aim was selected to contribute to a broader set of interventions to
reduce pre-discharge newborn mortality).

Priority clinical processes (postnatal care):
• Essential newborn care (cord care, keep warm/skin-to-skin, breastfeeding)
• Initiate breastfeeding within one hour of birth
• Chlorhexidine to umbilical cord per Nigeria protocol
• Counsel and administer newborn immunization (BCG, IPV/OPV)
• Keep women and newborn in facility after birth until at least 24 hours

Quality of care indicators (indicators marked with * required addition of a column
in the facility register):
Process indicators
• % newborns placed skin-to-skin with mother*
• % newborns initiating breastfeeding within first hour*
• % newborns provided chlorhexidine cord care per protocol*
• % newborns immunized before discharge per national protocol (BCG, OPV/IPV)
Outcome indicators
• Pre-discharge neonatal mortality rate

Note: BCG: Bacillus Calmette-Guérin vaccine for TB; IPV: inactivated polio vaccine; OPV: oral
polio vaccine; PPH: postpartum haemorrhage.



Chapter 4

4. Assessing and strengthening health information systems to measure and
monitor prioritized quality of care indicators

4.1 Key messages


■ A fit-for-purpose HIS is important for regularly measuring and
monitoring QoC to improve care.
■ Routine health information systems (RHIS) are usually the most
important data sources for regularly monitoring changes in QoC.
However, drawing on both routine and non-routine data sources,
and qualitative and quantitative data, enables a more complete
understanding of QoC.
■ Some QoC indicators that are important for monitoring quality
in a specific technical area for the purpose of QI (or QP and QC),
may not exist in a local HIS. Assessing the readiness of the HIS to
measure such indicators helps identify gaps and opportunities for
integrating these indicators.
■ Integrating new indicators in the local HIS requires careful
planning, close review of existing HIS tools and broad stakeholder
consultations to align with programmatic priorities, HIS review
cycles and national health sector plans.
■ Where possible, QI initiatives should avoid developing parallel HIS
for the purpose of measuring QoC, and should instead strengthen
and build on existing data systems and QoC monitoring strategies.
■ A decision to introduce new indicators in the local HIS should
be informed by whether the indicators are needed for long-term
monitoring. Indicators needed for time-limited QI initiatives do not
warrant the extra efforts needed to adapt the HIS; addition of such
indicators also risks overburdening the HIS.

4.2 Chapter overview


A fit-for-purpose HIS can provide both qualitative and quantitative data at the reporting
frequencies needed to identify problems and to track the effect of changes implemented
at various levels of the health system to improve MNCAH care.

This chapter reviews the different components of a comprehensive HIS, including the data
sources for monitoring and improving QoC across system levels for women, newborns,
children and adolescents. The chapter provides practical guidance on how stakeholders
can assess the availability of data for selected QI indicators in the existing HIS. Finally,
the chapter reviews considerations and key actions for identifying and incorporating QoC
indicators and data elements into the HIS for long-term monitoring (beyond the life of a
specific QI initiative).

4.3 Key terms and concepts


The terms HIS, RHIS, and HMIS are sometimes used interchangeably in mainstream health
information literature and practice (Box 3).

4.3.1 Definition of key terms

Box 3. Definitions of common health information terminologies used in the guide
• Health information system (HIS). A general term used to refer to the total
information system, comprising all systems that generate, capture, store,
manage, analyse, synthesise and communicate data related to the health of
individuals and the activities of organizations within the health system, for use
by a variety of stakeholders and for different purposes (23). A fully developed
HIS includes a range of subsystems – such as community health information
systems, home-based records, electronic medical records, health management
information systems, registries, laboratory information systems, pharmacy
information systems, logistics management systems, finance and insurance
systems, human resource information systems, household or population
surveys, health facility assessments, and other related data systems – which
may differ between countries.
• Health management information system (HMIS). A data system designed to
support planning, management and decision-making in health facilities and
organizations (23). The term HMIS is often used interchangeably with RHIS (23–25)
“though an HMIS may not include data on disease or health outcomes” (23).
The system of regular recording, reporting, analysis and presentation of health
facility data is known as the routine health information system (RHIS) (24).
“These systems generate data collected at public and private health facilities
and institutions, and at community-level health care posts and clinics – at
regular intervals of a year, at minimum. Most of the data are gathered by health
care providers, by supervisors, and through routine health facility surveys.
The sources of those data are generally individual health records, records of
services delivered, and resource health records including financial, commodity,
or laboratory records” (23).

• Health facility data recording and reporting forms: In this guide, used to refer
to case notes, individual patient records, individual reporting forms (e.g. surveillance
forms) and home-based records, or their equivalents, which can be used for patient
care to collect or record patient-level information in a RHIS, whether paper-based
or digital. They also include health facility registers, which consist of a list or
file containing uniform information about
systems registers, which consist of a list or file containing uniform information about
individuals, collected in a systematic and comprehensive way, in order to serve
a predetermined purpose (26). Registers are typically used to collect or record
socioeconomic and demographic information from the client or patient; their
clinical history and diagnosis; as well as treatment/care plan and outcomes. The
health facility register can be used as the basis for tracking individual patient
health care processes and outcomes, especially when they exist as electronic
medical records. Information from health facility registers is often summarized
and sometimes tallied across other health facility registers to create summary
registers or tally sheets or reporting forms, for the purpose of reporting
aggregated patient/client data. Patient registries, on the other hand, are
a collection of information about individuals, usually focused on a specific
diagnosis or conditions. Registry data is stored in a database and can provide
health care providers and researchers with first-hand information about people
with specific conditions, both individually and as a group, and over time. Patient
registries are different from health facility registers in that the former collect
patient information that is disease- or condition-specific whereas the latter
collect information about the patient regardless of their condition or disease (27).
• Data element. The smallest named item of data that has a unique meaning and
can assume a distinct value (28). With respect to an HMIS, data elements might
include client name, gender/sex, diagnosis, etc. Data elements are associated
with “data types that define their form”. These can include simple data types
such as date, time, numeric value, or complex data types, such as addresses.
• Data. A collection of data elements that convey specific information (e.g. %
of women who received antenatal care services during pregnancy in facility x).
Data may include any form of text, sound, visual or audio-visual recording (29).
• Data point. A piece of data representing one observation taken at a given point
in time (e.g. one cell in a data table showing antenatal care coverage in facility
x in a given month)
• Metadata. A structured reference data set that provides information about
other data (e.g. numerator and denominator of a specific indicator, or indicator
rationale, or indicator data source). They are the information needed to explain
and understand the data or values being presented (30).
• Dataset: A structured collection of related data on a given subject usually
presented in a data table, where rows typically represent individual records
or observations, and columns represent variables or features related to those
observations (ISO, 2014).
• Database. Structured system that allows data to be easily stored, accessed,
manipulated and updated (31).
• Information. Classified, organized and/or processed data that has some
meaningful value for the user (usually a result of data analysis) (32).

4.3.2 Components of a typical HIS with MNCAH data
Fig. 9 shows an illustrative example of an HIS and its subsidiary data systems that
constitute important data sources for QoC measurement and monitoring for the purpose
of improving MNCAH care. Stakeholders, including QI teams at national, subnational and
service delivery levels, can use a variety of data sources to identify and examine QoC
problems, define improvement aims and monitor QoC over time as they make changes.
The WHO framework and standards for country HIS (33) provides a detailed explanation
of some of these data sources.

Table 6 provides an illustrative example of the types of HIS data sources that a QI team
could use to plan and monitor a QI initiative to improve care for diarrhoea in children in
a setting of high diarrhoea mortality.

Table 6. Example of data sources from various components of the HIS to monitor
quality of care for children with diarrhoea

Improvement aim: Reduce morbidity and mortality from diarrhoea in children under
5 years old

Input indicators
• Stockouts of oral rehydration solution/zinc – Logistics management information
system and inventory monitoring sheet
• % health workers trained in diagnosis and management of diarrhoea – Human
resources information system and training registers

Process indicators
• % children with diarrhoea treated with oral rehydration solution and zinc – HMIS
data (e.g. case notes, health facility registers)
• % children with diarrhoea assessed for severe dehydration – HMIS data (e.g. case
notes, health facility registers)

Outcome indicator
• Institutional case fatality rate related to diarrhoea – HMIS data (e.g. case notes,
health facility registers, or civil registration and vital statistics system)

Fig. 9. An illustrative example of a typical HIS with different subsidiary data systems
that can act as potential data sources to support quality improvement, quality
planning and quality control

[Figure: data flows across four levels. National level: national aggregated data (e.g. in
DHIS2). Subnational level: summary data for regions/provinces/states and for
districts/counties or equivalent (e.g. in DHIS2). Health facility level: registry systems
(patient/client registries, health facility registry, health worker registry, clinical trial
registry); civil registration and vital statistics; clinical support data systems (radiology,
laboratory and pharmacy information systems); paper-based and digital patient data
collection and recording forms (clinical case notes, facility registers, electronic medical
records); mortality audits/reviews (maternal death reviews (MDR), perinatal mortality
audits (PMA), paediatric death audits (PDA), and other facility and community-based
mortality audits/reviews); periodic health facility assessments (hospital quality of care
assessments, harmonized health facility assessments, service delivery indicator
surveys, emergency obstetric and newborn care assessments, service provision
assessments, programme or project evaluations); management support data systems
(billing/finance and insurance, human resource management, logistics management
and patient referral information systems); and client- and caregiver-reported health
outcomes and experience of care (cellphone interviews, client surveys) alongside
providers' experience of care. Community/household level: periodic community-based
surveys (demographic health survey, multiple indicator cluster survey), community
score cards for quality of care accountability, community sentinel surveillance
systems, community health information systems and home-based records.]
4.4 Practical guidance
Different actions are required at national, subnational and health facility levels to assess
the readiness of the local HIS to provide data needed to monitor priority QoC indicators
at each level. QI stakeholders who are setting up new QI initiatives at each level will need
to track a set of priority QoC indicators that may vary by number, purpose of
measurement, and the need for their institutionalization in the local HIS. The guidance
in this chapter is therefore organized to account for these peculiarities across the health
system hierarchy. However, there are some key principles that should underpin the
process of HIS assessment and adaptation for QoC monitoring.

Illustrative actors: A team of HIS and/or M&E managers and/or technicians (or their
equivalent) at national or subnational level depending on local policies and
governance structures.

Leveraging the existing HIS to support QoC monitoring activities is usually a more
sustainable solution to improving the availability of QoC data than introducing short-lived
parallel data systems. This is true for QoC indicators prioritized for long-term monitoring
by national and subnational actors, as well as the indicators that are specific to a short-
term QI initiative that does not require that the indicators be institutionalized in the local
HIS for long-term monitoring.

Strengthening the existing data system can also decrease the reporting burden for health
facility teams and minimize data redundancies and reporting inefficiencies.

4.4.1 Key actions at national or subnational level


National and subnational actors may prioritize QoC indicators used in a QI initiative for
long-term monitoring and recommend incorporation into the HIS. However, the process
of incorporating newly prioritized QoC indicators into a local HIS varies depending on the
local context. Therefore, the guidance provided here is general and should be adapted
to fit specific local conditions.

a. Conduct a comprehensive HIS landscape assessment


Once a set of priority MNCH QoC indicators for long-term measurement and monitoring
by national and subnational QI stakeholders has been identified and agreed upon, it is
crucial to conduct a comprehensive HIS landscape assessment. This will help determine
whether the indicators and/or the data elements required to calculate them already
exist within the local HIS. Although there is no standardized approach to conducting
this assessment, a series of guiding steps can be followed to gather the necessary data.
This will allow stakeholders to determine which prioritized indicators can be collected
and monitored immediately, and which ones will require further work on the HIS for
institutionalization.

b. Define the metadata for priority QoC indicators


As noted in Box 3, indicator metadata provide additional information about the indicator’s
attributes such as its numerator and denominator, data source, data collection frequency,
etc.

Ensuring that there are clearly defined metadata for all priority QoC indicators is necessary
before conducting a HIS landscape analysis. This step is essential to ensure the primary
data recording, indicator calculation, reporting and analysis are standard across levels
(e.g. health facilities, districts, etc).

For some priority QoC indicators, especially those already established in the existing M&E
framework for regular collection and reporting, the metadata may already exist although
it may be necessary to review and align them with emerging MNCH QoC measurement
guidance and frameworks at the global level.

Indicators may have more than 30 metadata fields. This guide recommends completion
of a minimum set of 10 metadata fields for each QoC indicator prioritized for national or
subnational level monitoring. These fields are explained in Table 7 and one illustrative
indicator (i.e. pre-discharge neonatal mortality rate by cause) is used as a running example
across the metadata fields.

Table 7. Recommended minimum set of metadata fields

Field name Definition and rationale


Indicator name • This is usually a short name attributed to an indicator e.g. Pre-discharge
neonatal mortality rate by cause.
• The name should distinguish the indicator from other indicators in the
same domain to avoid any confusion. For example, by definition, pre-
discharge neonatal mortality rate by cause is not necessarily the same
as neonatal mortality rate even though they are applied to the same
population group. The former is a subset of the latter.
Definition • This is where different measurement and technical attributes of the
indicator are defined for standard measurement and reporting.
• In the example, the definition of the indicator is: The proportion of
neonates up to 28 days of completed life who were born live in the
facility and died from specific causes prior to discharge from the facility
(excluding re-admissions for illness).
• A poorly defined indicator will be difficult to map in the local HIS. The
definition should be specific enough to help identify what is being
measured, in which population, the timeframe, and where, etc. The
definition should also clarify the technical terms included in the
definition. However, in some cases, detailed information about the
indicator is included in the numerator and denominator which are
explained further below.
Data type • The data type describes how the indicator value should be or is usually
reported in the HIS. In the example, the data type is a Rate as the
indicator name implies.
Technical area • When conducting the HIS landscape assessment it is recommended to
define the service line in which the indicator is usually measured and
reported. For example, sick child vs well child services. In the example,
the technical area would be Newborn health care after birth in a facility.

User and purpose of use
• Once the technical area is defined, it is recommended to establish consensus on
who will use the data generated by this indicator and to what end. If the user and
the purpose of using the indicator cannot be determined, the inclusion of the
indicator in the HIS landscape assessment would not be justified.
• In the example, maternity health workers and members of the maternity
QI team would use the indicator to understand leading causes of
newborn deaths in their facility to inform planning and monitoring of a
QI initiative to reduce newborn deaths.
Disaggregation • This defines how the indicator can be disaggregated according to
specific profiles that may be of interest to different stakeholders, (i.e. by
geographical location, age group, sex, rural/urban, among others).
• In the example, the indicator data can be disaggregated by small and/ or
sick newborns, or by the level of facility care (i.e. level 2 or 3).
Numerator • The numerator is usually the actual number of people that experience
an event or items/objects that exhibit a particular trait or characteristics
in a specified population and period.
• In the example, the numerator is: # neonates up to 28 days of completed
life who were born live in a facility and died from specific causes prior to
discharge from the facility (excluding re-admissions for illness).
Denominator • The denominator is the total number of the population or items/objects of
interest from which the numerator was drawn.
• In the example, the denominator is: Total # live births
Individual data • This is where the individual data elements of the numerator and
elements denominator are specified. This is critical for comprehensive mapping,
which requires the examination of individual data elements and their
availability in different data sources, and to identify opportunities for
HIS adaptation to include missing or partially available indicators.
• In the example indicator, the individual data elements are:
– # neonates up to 27 days of completed life who were born live in a
facility and died prior to discharge from the facility, due to prematurity
(excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a
facility and died prior to discharge from the facility, due to sepsis
(excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a
facility and died prior to discharge from the facility, due to asphyxia
(excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a
facility and died prior to discharge from the facility, due to other causes
(excluding re-admissions for illness).
– Total # live births.
Frequency of • This refers to how often data is reported to track the progress and
data reporting performance of time-bound QI initiatives or for routine monitoring for
QC purposes. The reporting frequency will depend on several factors,
including the purpose of measuring the indicator (e.g. QI or QC), the
urgency of the indicator data for making decisions, the available
resources, and the data source (i.e. RHIS, periodic survey, etc).

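One simple way to hold this minimum metadata set in machine-readable form is a structured
record per indicator. The sketch below encodes the running example from Table 7 as a Python
dictionary; the representation itself is an assumption (not a prescribed format), while the
field values follow the table.

# Minimal sketch: the minimum set of 10 metadata fields for one QoC
# indicator, as a structured record. The representation is illustrative;
# the values follow the running example in Table 7.
indicator_metadata = {
    "indicator_name": "Pre-discharge neonatal mortality rate by cause",
    "definition": ("Proportion of neonates up to 28 days of completed life "
                   "who were born live in the facility and died from specific "
                   "causes prior to discharge (excluding re-admissions)"),
    "data_type": "Rate",
    "technical_area": "Newborn health care after birth in a facility",
    "user_and_purpose": ("Maternity health workers and QI team; understand "
                         "leading causes of newborn death to inform QI"),
    "disaggregation": ["small and/or sick newborns", "level of facility care"],
    "numerator": ("# neonates born live in the facility who died from "
                  "specific causes prior to discharge"),
    "denominator": "Total # live births",
    "individual_data_elements": [
        "deaths due to prematurity", "deaths due to sepsis",
        "deaths due to asphyxia", "deaths due to other causes",
        "total live births",
    ],
    "reporting_frequency": "Monthly (illustrative; depends on purpose)",
}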
c. Develop, adopt or adapt an HIS landscape assessment tool
Once the metadata fields for all priority indicators have been completed, a standardized
instrument can be used to collect different pieces of information from the local HIS. This
helps determine whether provision has already been made for these indicators or their
constituent data elements to be collected, and whether they are already being collected
and reported, routinely or periodically.

The following four scenarios are possible:

• An indicator and all its data elements are available, as defined, in the local HIS (i.e.
provision has been made in the HIS for these indicators and their metadata to be
collected either routinely or periodically, using the same or different data sources;
see Fig. 9).
• An indicator with a name similar to what is being assessed already exists in the local
HIS, but the constituent data elements are different by definition. This is possible, for
example, where indicators recommended globally are being adopted for use at country
level but the standard of care they seek to measure has not been fully adapted to the
local context. For example, early initiation of breastfeeding can be defined as the %
babies born alive in a facility who are breastfed within one hour (60 minutes) of birth, but
a country that uses 90 minutes as the standard for timing of breastfeeding may have
the same indicator name in the local HIS but defined slightly differently as % babies
born alive in a facility who are breastfed within 90 minutes of birth.
• An indicator for which only a portion of the constituent data elements can be found
in the local HIS. Numerators are typically more difficult to find than denominators,
especially when the former consists of more than one data element. This is often the
case for process indicators that tend to measure a process of care involving more than
one activity or care pathway.
• The indicator being assessed is completely new and neither the numerator nor
denominator – including all individual data elements – exist anywhere in the local
HIS. This is typical for indicators that have been recently introduced as part of new
QI initiatives or indicators developed for emerging QoC measurement areas, such as
experience of care indicators.

Whether an existing HIS landscape assessment tool is adopted/adapted or a new one is
developed, it is vital to be cognisant of these possible scenarios and use a tool that will
help with judgements about the extent to which the indicators and their data elements
are available in the local HIS, and if available, the potential data sources that can be
leveraged for future monitoring of these indicators.
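
When recording judgements during the mapping exercise, each priority indicator can be
tagged with one of the four availability scenarios above. The sketch below shows one
hypothetical way to capture that classification; the structure and example entries are
illustrative and are not part of the HISLA tool described below.

# Minimal sketch: recording the HIS availability scenario for each
# priority indicator during a landscape assessment. The structure and
# entries are illustrative, not part of any standard tool.
SCENARIOS = {
    1: "Indicator and all its data elements available as defined",
    2: "Similar indicator name exists but data elements differ by definition",
    3: "Only some constituent data elements found in the local HIS",
    4: "Completely new; no numerator or denominator elements exist",
}

mapping_results = [
    {"indicator": "Early initiation of breastfeeding", "scenario": 2,
     "note": "Local standard uses 90 minutes rather than 60"},
    {"indicator": "Companion of choice during childbirth", "scenario": 4,
     "note": "No existing data source; candidate for an exit survey"},
]

for row in mapping_results:
    print(f"{row['indicator']}: scenario {row['scenario']} - "
          f"{SCENARIOS[row['scenario']]} ({row['note']})")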

This being the case, the starting point should be to check if there is already an HIS
landscape assessment tool that has been developed locally for the same or similar
purpose. There are also some programme-oriented tools that have been developed
globally such as the Every newborn-measurement improvement for newborn and stillbirth
indicators (EN-MINI) tools for routine health information systems (34), and the Maternal,
newborn, child and adolescent health RHIS country mapping template (35).

This guide describes how to use a new HIS landscape assessment tool developed by WHO
specifically to map QoC indicators and their constituent data elements in the local HIS
using either routine or non-routine data sources. This tool is entitled: Health information
system landscape assessment (HISLA): a tool for assessing the feasibility of collecting,
reporting, and using quality of care indicators, or the ‘HISLA tool’ for short. As described
in section 1.2.2, the HISLA tool was iteratively tested and refined in five countries. The
tool contains instructions on how to use it and, in summary, consists of four sections:

1. Introduction: outlining the purpose of the HISLA tool and its components.
2. User instructions: providing step-by-step instructions on how to use the tool.
3. Indicator mapping form: to be used to map each priority QoC indicator against the
local HIS based on a suite of metadata.
4. HIS tools and linkage form: an adaptable form that can help capture information about
different data recording, reporting and visualization tools being used for a specific
technical area (e.g. child health).

The country example included at the end of this chapter provides an example of the
analytical output that was generated from the HISLA tool based on the assessment
conducted in Uganda.

d. Conduct the HIS landscape assessment

Decide which assessment method to use


The health information system landscape assessment can either be done through a
desk review or may involve physical visits to health facilities. In some instances, both
approaches may be necessary.

• A desk review is recommended when there are limited resources to support site visits
to health facilities and the local HIS is well developed. In such settings, all or at least
most health facilities in the country will be using the same standardized data recording
and reporting tools, and the health facilities are reporting aggregated data using the
same platforms and procedure. A desk review will also usually require the most up to
date indicator reference sheets and protocols describing which indicators should be
collected in different types of health facilities. In the absence of such information, a
desk review could yield inaccurate information to inform HIS adaptation.
• Site visits to health facilities have several benefits relative to desk reviews. Visiting
health facilities offers an opportunity to better understand: how services are organized
for the technical areas of interest (e.g. maternal health) and at different service
levels; whether the organization of services mirrors the flow of health data; how data
recording/collection, aggregation and upward reporting are done; the format of the
tools used for these operations and any differences across health facilities; and an
initial glance at the possibility of measuring, reporting and using the MNCAH QoC
indicators of interest. Therefore, if resources are available, the most objective approach
to assessing the availability of QoC indicators and their metadata in the local HIS is
through site visits.

Plan for site visits


Before conducting site visits, a number of preparatory steps should be considered.

• Consider selecting a sample of health facilities with different characteristics. This is important, as assessing all health facilities will often not be feasible in resource-limited settings. Furthermore, it is common for health facilities operating at different levels of the health system (i.e. primary, secondary, tertiary health facilities) or in different health sectors (i.e. private and public health facilities) to collect and report data for different indicators depending on the scope of services they offer. For example:
– Health centres will usually not collect data on indicators related to inpatient care as they often do not offer inpatient services. Such facilities also have fewer service points than their higher-level counterparts and, correspondingly, use fewer health facility registers and collect the least data related to MNCAH QoC.
– Also, depending on how services are organized, some health facilities are not mandated to report mortality data as they experience fewer deaths and/or refer critically ill patients to health facilities at a higher level.
– Health facilities in the private sector may also collect specific data not collected by their public sector counterparts or use different reporting mechanisms. Local HIS are typically designed using a tiered approach to data collection and reporting.
Having a good mix of health facilities helps with understanding which data are collected and reported by which type of health facility, and whether the tools being used are the same/standardized.

• Consider informing health facility managers about planned site visits. The
modalities for this process will vary depending on the local protocols and procedures.
Sometimes, when the assessment is organized and/or supported directly by the
national ministry of health, direct communication to subnational authorities informing
them about the planned activities and the criteria for health facilities to be assessed
will trigger further communication to the leadership of the concerned health facilities
requesting permission for the activities to be carried out. The goal is to ensure that all
the leaders and managers at the relevant levels of the health system are aware of the
scope and nature of the HIS landscape assessment and are in support.
• Select and orient a team of assessors. The team of assessors can consist of HIS,
M&E and programme focal points representing the ministry of health. Where possible,
representatives from partner organizations can be invited to support the assessment.
The goal is to ensure that the team has a good balance of technical expertise related to
the programme of interest and HIS/M&E for complementarity during the assessment.
– Several teams may be required to carry out the assessment depending on the
number of health facilities to be visited and the distance between them. However,
each team should consist of no more than six people so that on the day of the
assessment, the movement in the health facility from one service point to the
next is less disruptive.
– All the members of the team should be oriented on the MNCAH QoC indicators to
be assessed, the methodology and purpose of the assessment, and how the site
visits are to be conducted. It is important for each team member to be familiar
with the indicators being assessed prior to the site visits as this knowledge will
help them ask the right questions and visit the most relevant service points in
the health facility. For example, knowing in advance that there is an indicator measuring children's access to play and educational materials during hospitalization will prompt team members to ask whether and how such an indicator is measured and, if the data are collected, how they are recorded and reported. A desk review might therefore be an important step in the preparatory activities.

Conduct site visits

Ideally, the assessment should start with a courtesy visit to the management of the sampled health facility.

• Here, the manager can be briefed on the purpose of the visit and the process that will be followed, including the expected level of service disruption and the duration of the visit.
• The manager can be requested to assign a staff member, as available, to take the team of assessors through the various service points that patients seen at the health facility come into contact with until they have been discharged, whether alive/cured or dead. This includes both inpatient and outpatient services.

As described earlier, this process offers an opportunity to examine which data are
collected where, how, and using which tools. Box 4 provides a summary of a generic
process that can be followed to conduct health facility assessments using child health
services as an example.

Box 4. Mapping data flow in child health services

1. Identify the first point of contact for the sick child visiting the health facility (e.g. registration area).
2. While at this service point, introduce the assessment team and explain the purpose of the visit.
3. Continue by asking the health care worker:
– To describe the scope of services provided there, the kind of information collected about the child, and where the information is recorded.
– To provide a copy of each data collection or recording tool used at that service point to capture information about the child. If there are no extra copies for the assessment team to take away, request to take a photograph of the front page of the tool and of blank pages showing all the data elements/variables. For electronic tools, request a soft copy if available.
– Whether the tool was standardized by the government and when it was last updated, or whether the tool was developed by the health facility, for research purposes, or through a special partner-supported project.
– Whether there are sporadic shortages of supply for the tools and whether improvised tools are sometimes developed. If makeshift tools are currently in use, request a sample and take a copy or photographs, whichever is appropriate or possible.
– The next service delivery point where the child is sent for further care.
– Where the information collected at the current service point is sent, if anywhere. Some data are kept at the service point or sent directly to the records office for tallying and upward reporting.
4. Move to the next service point and repeat steps 2–3 until you have gone through all service points.
5. Continue to the well-child service line and repeat steps 1–4 as applicable.

When moving from one service point to the next, it is important to write down some notes regarding the services offered in that health facility, key observations made, and the names of the data collection and reporting tools used. This information can be used during a debriefing meeting, which the teams should attempt to hold before the actual indicator mapping process begins. The debriefing meeting allows teams to compare notes and determine whether there are any differences in observations that must be considered when indicators and their data elements are eventually mapped. A sample template for field observations is provided in Table 8.

Table 8. Field observation template

Name of the health facility:
Service level:
Team members:

Service point | Data collection and reporting tools | Comments
… | … | …
… | … | …
… | … | …

Populate the HISLA tool

It is at this stage that the HISLA tool described earlier is populated. Populating the tool during site visits would be cumbersome and could potentially disrupt services, as the mapping process requires time.

• It is ideal to convene the same teams of assessors in a centralized location, led by the process owner. The entire indicator mapping process might require several days to complete, depending on whether the team is large enough to allow smaller teams to focus on specific technical areas, and on the number of indicators and metadata to be mapped. Indicators with complex numerators and/or denominators (and hence multiple data elements) require much longer to map against the HIS. Conversely, indicators with fewer data elements will be quicker to evaluate, as will those that are generally known to be new to the HIS.
• In principle, teams should aim to evaluate a small number of indicators in one session. Experience shows that the mapping process often involves insightful discussions between the programme and HIS/M&E teams, and that deciding which indicators/data elements are available in the HIS, and where, is not a clear-cut process. Such decisions should not be rushed, as recording accurate information and actionable recommendations for each indicator in the tool, agreed through group consensus, is important for the next steps of implementation.
• Populate the HISLA tool: Detailed instructions on how to complete the HISLA tool are
included in the tool. The aim is to ensure that all the necessary information needed to
make a judgment as to whether each prioritized indicator can be used in the short or
long term are captured in the tool.

Analyse the HIS landscape assessment data

Summary results can be organized to show two important categories of indicators:

• Indicators immediately available for adoption and use: These include indicators found to already exist in the HIS as defined, or those for which 100% of the data elements required to calculate them are provided for in the HIS. Therefore, zero to minimal effort would be required to start using them in the short term to drive QI. For example, an indicator may not currently be reported in the HMIS as a percentage or rate, but if the data elements to calculate this indicator as a rate are routinely collected, efforts to adopt the indicator in the immediate term might include extracting data directly from different (or the same) sources to calculate the indicator as a rate.
• Indicators requiring extra effort to adopt and use: These would be indicators whose data elements could not be found in the HIS (either in part or as a whole) and that would require considerable effort to integrate in the HIS.

e. Plan for and implement HIS adaptations


The insights and comments from the HIS landscape assessment and data analysis workshop should be used to develop and implement a plan for adapting the HIS to collect and report the indicators that merit long-term monitoring but are currently not collected, or only partially collected. However, the process of adapting the HIS to incorporate these indicators or their data elements can differ across settings. This guide therefore only highlights important considerations that can be adapted or expanded upon as needed. To illustrate some of these considerations, Box 5 presents an example of standard operating procedures in Kenya to review and adapt the HIS to meet evolving data needs.

Box 5. Kenya approach to adapting and strengthening the HIS


The Kenyan Ministry of Health’s approach to adapting the national HIS to meet new
data demands and align with evolving health sector plans has been methodical
and comprehensive. The goal has been to standardize data collection, streamline
reporting tools, and ensure that health facility registers capture only the most
relevant and necessary data. These efforts are critical in addressing the country’s
key health priorities, including MNCH, HIV/AIDS, malaria, and TB.
One of the central elements of the Kenya approach has been the integration of the
district health information system (DHIS2) in their national HIS. This centralized
platform has transformed how health data are tracked, collected and reported at all
levels of the health system. By integrating DHIS2, Kenya has enabled real-time data
entry and reporting, which facilitates better decision-making and accountability.
The introduction of unique patient identifiers within the DHIS2 framework has
also been significant, as it helps reduce duplicate patient records across facilities,
particularly in the management of MNCH, HIV/AIDS, malaria and TB services. Kenya
has implemented a master patient index (MPI), which ensures that each patient has
a unique identifier, eliminating issues related to duplicated records and enabling
better longitudinal tracking of patient outcomes at the individual level. This system
has proven essential for monitoring outcomes in MNCH and other critical health
services, where comprehensive and accurate data are crucial.

Another major initiative undertaken has been the revision and streamlining of health facility registers. Previously, many registers used at health facilities were redundant or did not capture essential data. By revising these registers and reducing their number, the Kenya Ministry of Health has been able to streamline data collection and ensure that only the most relevant data are captured. This revision process not only improves the quality of the data collected but also reduces the administrative burden on health workers.
Kenya has also prioritized data harmonization through a series of workshops
designed to align health programme indicators and standardize the forms used
for data collection. These workshops bring together stakeholders from various
health programmes, ensuring that indicators are fully aligned, which promotes
data consistency and reduces redundancies. This alignment is critical for making
national-level reporting more efficient and improving the overall quality of health
data in the country.
A key component of Kenya’s health data standardization efforts is the periodic review
of the Health sector indicator manual. This review process is triggered by changes
in health policies, the National Health Sector Strategic Plan, or new data needs
identified by health programmes. The HIS Unit in the Ministry of Health is responsible
for initiating this review process by inviting various health departments, divisions
and units to submit proposals for new or redefined health indicators. Once these
proposals are received, a technical working group is convened to review, refine and
revise the indicators. The updated indicators are then incorporated into the manual.
Following the revision of the indicators, Kenya moved to the design and review of
data collection tools. A task force is convened to design the registers and reporting
tools based on the periodically updated manual, ensuring that the tools capture
all relevant data elements and are user-friendly. To further support health workers
in using these tools, instruction manuals are created to guide them through the
data collection process. Before these tools are rolled out nationwide, they undergo
pre-testing in selected settings to ensure they are practical and user-friendly in
real-world contexts. Feedback from these pre-tests is then used to make final
adjustments to the tools before their full national implementation.
Once finalized, the revised tools are printed and distributed to health facilities
across the country. To ensure a smooth transition, health workers are trained on
the use of the new tools, and a clear transition plan is developed and disseminated.
This plan helps guide health workers as they switch from the old data collection
systems to the new standardized ones.
The tools are then customized into DHIS2 to allow for seamless data entry and
reporting. This ensures that all the health data collected at facility and sub-national
levels can be efficiently integrated into the national HIS. By integrating these revised
tools into DHIS2, Kenya has improved the timeliness and accuracy of health data
reporting, which has strengthened decision-making at all levels of the health system.
The process of reviewing and updating Kenya’s HIS tools is not a one-time effort but
is instead part of an ongoing process. Every two years, the health sector indicator
manual and associated data collection tools undergo further reviews as guided by
the country’s Health Information Policy. These periodic reviews ensure that the
health data collection systems remain aligned with emerging health priorities,
policy shifts and changing data needs.

Source: (36).

Technical considerations

The responsible team should explore how the HIS infrastructure, design and interoperability need to evolve to accommodate new data points and functions. Key actions may include the following.

• Ensure that the HIS can be scaled up effectively to handle new indicators. This may require enhancing its capacity for data recording, indicator calculation, reporting and processing. This includes verifying, for example, that HMIS tools, such as case notes and facility registers, can accommodate the additional data volume without compromising the integrity of the tool. Adding missing indicators or metadata to the HMIS may involve adding new columns to existing tables or creating entirely new tables, ensuring that each field is assigned the appropriate data types and lengths.
• Identify any performance bottlenecks in the HIS that could be exacerbated by adding
new indicators, such as low-quality recording of patient information or delays in
reporting summary data.
• Implement version control of adapted tools to ensure that changes are well-
documented and reversible if needed.
• Align new indicators with existing data standards, such as the international classification
of diseases (ICD–11) codes, to ensure consistency and comparability across datasets.
• Develop validation rules to enforce correct data entry for the new indicators. This could include defining acceptable value ranges, required fields or validation checks for certain data types (e.g. numeric vs text); a simple sketch of such rules is shown after this list.
• Ensure that the new indicators and their data elements are integrated in a way that
maintains compatibility with other systems, such as electronic medical records,
laboratory information systems, supply chain management systems, or national health
databases. This includes ensuring consistent data formats and reporting protocols.
• If data related to the new indicators needs to be transferred between different systems
(e.g. HMIS summary/reporting forms to DHIS2), data mapping and transformation
rules must be developed. This ensures that data is accurately transferred and correctly
understood across systems.
• In cases where data for new indicators are derived from multiple data sources and the calculation of these indicators is complex, consider automating the calculations to minimize errors (the sketch after this list also illustrates a simple automated calculation).
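To make the validation and automation ideas above concrete, the following is a minimal sketch in Python. All field names, value ranges and the example indicator (percentage of newborns weighed at birth) are illustrative assumptions rather than prescribed standards; actual rules should mirror the locally agreed indicator metadata.

# Illustrative only: hypothetical field names and ranges for one facility report row.

def validate_record(record: dict) -> list:
    """Apply simple validation rules and return a list of error messages."""
    errors = []

    # Required fields must be present and non-empty.
    for field in ("facility_id", "reporting_month", "newborns_weighed", "live_births"):
        if record.get(field) in (None, ""):
            errors.append("Missing required field: " + field)

    # Data type and acceptable-range checks for numeric fields.
    for field in ("newborns_weighed", "live_births"):
        value = record.get(field)
        if value is not None and not isinstance(value, int):
            errors.append(field + " must be a whole number")
        elif isinstance(value, int) and not 0 <= value <= 1000:  # assumed plausible monthly maximum
            errors.append(field + " is outside the acceptable range 0-1000")

    # Cross-field rule: the numerator cannot exceed the denominator.
    num, den = record.get("newborns_weighed"), record.get("live_births")
    if isinstance(num, int) and isinstance(den, int) and num > den:
        errors.append("newborns_weighed cannot exceed live_births")

    return errors

def indicator_as_percentage(record: dict) -> float:
    """Automated calculation: percentage of newborns weighed at birth."""
    return 100.0 * record["newborns_weighed"] / record["live_births"]

report = {"facility_id": "HF-001", "reporting_month": "2024-01",
          "newborns_weighed": 92, "live_births": 100}
if not validate_record(report):
    print(round(indicator_as_percentage(report), 1))  # prints 92.0

In practice, such checks would typically be configured at the point of data entry (e.g. as validation rules within DHIS2 or an electronic medical record) rather than run as standalone scripts; the sketch simply shows the logic involved.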

Operational considerations
Operational considerations focus on the practical implementation of an effort to introduce
new indicators in the HIS.

• A large-scale training or capacity strengthening programme may be required to orient health facility teams on how to collect, input, manage and analyse data for newly introduced indicators, including how to use the data as part of QI planning, implementation and reviews. Tailoring training materials to different user groups is important to account for their specific roles in the HIS. For example, frontline health workers need a clear understanding of how to collect and report the required data, while M&E staff require skills in data analysis and interpretation.
• Establish support systems to assist health workers and data clerks who face challenges with the new indicators. Such support can be provided during supervisory visits or onsite support visits by QI teams operating at national or subnational levels, and can include troubleshooting technical issues or providing clarification on indicator definitions. These programmes can be a good opportunity to reinforce the skills learned during training and provide ongoing guidance, particularly in the early stages of adapting to the new indicators.
• Insights from the HIS landscape assessment related to data collection, entry and reporting workflows can help identify where the new indicators can be integrated smoothly and where adjustments are needed. Where necessary, workflows can be adapted to incorporate new indicators without disrupting existing data management processes.
• Consider implementing the new indicators in phases, starting with a pilot phase
in select facilities or regions. This allows you to test the changes in a controlled
environment and make necessary adjustments before rolling them out system-wide.
• Ensure that the introduction of new MNCAH QoC indicators is coordinated with other
linked vertical health programmes (e.g. HIV, TB, immunization) to avoid overburdening
health workers with multiple reporting requirements. Where possible, streamline data
collection across programmes.
• If the local HIS uses electronic health records or mobile health systems for collecting
non-routine data such as patient experiences of care, consider updating the application
to include fields for the new indicators. This may also involve modifying data entry
interfaces to ensure they remain user-friendly and efficient.
• Be mindful of the workload impact of the newly introduced indicators on frontline
health workers. If the new indicators add a significant data collection burden, simplify
where possible, or provide additional support, such as data clerks or digital tools to
streamline the process.
• Consider the reporting frequency of any new indicators (e.g. monthly, quarterly). The frequency should be feasible given the resources available and the other data reporting requirements already in place. For example, data collected through exit interviews may not be feasible to collect every month; consequently, the reporting frequency may need to be quarterly instead.
• Integrate new indicators into existing reporting cycles to avoid creating additional
timelines or deadlines that could confuse or overburden staff.
• Ensure that the data generated from the new indicators can be accessed at least in
near-real-time by relevant QI stakeholders and decision-makers.
• Consider creating feedback loops so that health facilities and data collectors receive
regular updates on how the data they are collecting for new indicators is being used.
This helps to foster a culture of data-driven decision-making and motivates continued
high-quality data collection.
• The initial HIS adaptation for newly developed indicators is a great opportunity to
ensure that the system remains adaptable to future changes in QI priorities or indicators.
Therefore, consider establishing processes and protocols for regularly reviewing and
updating the HIS as new QI priorities or QoC measurement requirements emerge.

Strategic considerations
Strategic considerations are critical for ensuring that the integration of new indicators
aligns with broader HIS goals, policies and long-term sustainability.

• In many LMICs, HIS adaptations – also known as HIS reviews – are carried out in three- to five-year cycles. Where the cycles are infrequent, the integration process might not be timely enough to support QI programmes. It may therefore be important to adjust QI plans and strategies at national and subnational levels to include only those prioritized indicators that are feasible to measure and monitor immediately, and then add the missing indicators in the next HIS review cycle.
• Consider the long-term financial implications of adding new indicators. This includes not
only the costs of initial implementation (e.g. HMIS adaptation, training) but also ongoing
costs related to data collection, system maintenance, staff support and related upgrades.
Identifying sustainable funding sources, whether through government budgets, donor
support, or partnerships with nongovernmental actors, is essential to avoid disruptions.
• For sustainability, the new indicators should be prioritized for long-term monitoring
and fully integrated into the national HIS architecture rather than in a temporary or
parallel system. This ensures they become part of routine data collection and reporting
to inform ongoing improvement work at national and subnational levels.
• Consider setting up collaborations across different health programmes and departments to prevent siloed data systems. For example, coordination of the immunization, nutrition, early childhood development, HIV and TB programmes with the child health programme ensures that new QoC indicators are complementary and do not duplicate efforts.
• When adding new indicators, ensure the collection, reporting and use are also
in compliance with national data privacy regulations, such as laws on patient
confidentiality, data security, and informed consent. Where prioritized indicators have
ethical implications, data collection processes should be designed to protect patient
privacy and dignity, and appropriate consent should be obtained before collecting
personal health data.
• Consider making sure that the new indicators are integrated in a way that allows for data collection and disaggregation across different population groups (e.g. gender, age, socioeconomic status, geographic location) to help ensure equity in health service delivery and outcomes; a small disaggregation sketch follows this list.
• If time and resources permit, consider incorporating an iterative process where the
performance of the new indicators is periodically reviewed, and adjustments are made
based on the lessons learned. This could involve modifying indicator definitions,
improving data collection processes, or updating training and support for HIS users.
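As a simple illustration of equity-oriented disaggregation (a hypothetical sketch; the column names and the use of the pandas library are assumptions), indicator results can be broken down by population group before review:

# Hypothetical exit-interview records; pandas is assumed to be available.
import pandas as pd

records = pd.DataFrame({
    "age_group": ["<20", "<20", "20-34", "20-34", "35+", "35+"],
    "reported_verbal_abuse": [1, 0, 1, 0, 0, 0],  # 1 = yes, 0 = no
})

# Percentage of respondents reporting verbal abuse, disaggregated by age group.
by_group = records.groupby("age_group")["reported_verbal_abuse"].mean().mul(100)
print(by_group.round(1))

Reviewing results in this disaggregated form makes it easier to spot groups for whom care is not improving even when the overall indicator looks good.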

4.4.2 Key actions at the health facility level

Illustrative actors: Health information officers, in consultation with the rest of the facility-based QI team and subnational HIS and/or M&E managers as needed.

QI initiatives are often led by facility managers or QI teams embedded within a health facility. When these efforts are independent of broader multi-site QI programmes managed by subnational or national health authorities, they may require temporary, 'project-specific' indicators that may not be immediately feasible to monitor routinely through the local HIS. These QI initiatives are typically focused on addressing localized quality issues within a single health facility and are often short-term in nature. As a result, the specific QI focus can vary across different facilities, requiring distinct sets of indicators tailored to each facility's needs at any given time.

In resource-limited settings, adding extensive sets of indicators to the local HIS for routine measurement and monitoring can be costly and burdensome, especially if there is no plan for their long-term use.

This section offers practical guidance on how to assess and leverage the existing HIS to improve the availability of QoC data for supporting health facility-based QI initiatives. This process is relatively less complex and resource-intensive compared to what has been described for national and subnational level activities.

a. Define the indicator metadata fields


At health facility level, detailed metadata for each indicator may not be required, especially
considering that some indicators might not be needed for long-term monitoring or
institutionalization in the local HIS. Defining the numerator, denominator, frequency of
reporting (i.e. to health facility-based QI teams) and data type (see Table 6 for definitions)
is often enough at this level of QI planning and implementation. If there is suboptimal
capacity at health facility level to define these metadata fields for the indicators of
interest, support can be requested at subnational level (e.g. a district or region, province
or equivalent), or from implementing partners supporting the health facility. The aim is
also to ensure that the indicators are well developed to provide useful insights that can
guide improvement at health facility level.

b. Review HMIS to determine measurement feasibility


The process of verifying whether QoC indicators prioritized at health facility-level
monitoring are feasible to monitor immediately is relatively less complex compared to
national or subnational level processes. Once the indicator metadata have been defined,
facility-based health information officers can check which of the indicator data elements
are already being collected and reported routinely in RHIS, and if the HMIS data sources –
such as case notes, health facility registers, tally sheets and summary reports – contain
the data elements needed to calculate the indicators of interest.

c. Adapt HMIS to accommodate missing indicators and/or metadata


If some data elements are missing in HMIS, they can be added to the relevant tools using
locally available resources. For example, health information officers can create a layout
for the additional data elements within the existing paper-based health facility registers
or case notes (e.g. by adding a column to a register, adding a standard clinical assessment
criterion to a patient case note, etc). It is important to decide on the placement and size
of new fields added to the forms to ensure that they fit seamlessly without overcrowding
the page. Consider using standardized formats for consistency.

d. Train facility-based staff on data entry

The addition of new data fields and elements to existing data collection and recording forms will necessitate training the staff members responsible for data entry on the new data elements and how to record them accurately, including providing clear instructions and guidelines to maintain data quality. It is therefore important to regularly monitor the quality and accuracy of data entered into the adapted tools.

Table 9. Summary of key actions for national, subnational and facility levels

National
• Promote and support the use of the national HIS by QI managers across health system levels to assess the availability of the data they need for their QI initiatives. This will minimize efforts to resort to parallel systems.
• Provide governance and leadership to help strengthen the national HIS to accommodate indicators needed to monitor priority MNCAH QoC indicators on an ongoing basis.
• Develop and disseminate national standard operating procedures and protocols on how to collect, collate, aggregate, report, analyse and use MNCAH QoC data to support QI across health system levels.

Subnational (regional/district)
• Support health facilities to select MNCAH QoC indicators in line with health facility-specific or multi-site subnational QI initiatives.
• Support health facilities to complete relevant metadata fields for their priority MNCAH QoC indicators as part of their QI work.
• Support health facilities to analyse and make decisions about which indicators they need to measure and monitor locally, which they can monitor based on the existing HIS, and for how long.
• Provide any additional resources needed by health facility managers.

Facility (including primary care levels)
• Assess the data collection, recording and reporting forms available in the health facility to identify opportunities for measuring and monitoring indicators prioritized as part of the QI effort at health facility level.
• In cases of limited capacity, request technical support from subnational health authorities for QoC measurement and monitoring efforts for the purpose of improvement, including developing or selecting appropriate indicators and analysing data as part of QI implementation.

4.5 Country example


The country example included in this chapter describes the process that Uganda followed
to assess the readiness of the HIS to measure WHO-recommended priority QoC indicators
for measuring QoC standards for children and young adolescents in health facilities (see
Table 2). The assessment was part of a broader effort to strengthen the national HIS to
measure priority QoC indicators for children and young adolescents.

Assessing Uganda’s HIS readiness to collect and report on


paediatric and young adolescent QoC indicators
Context

In 2022, WHO developed a set of priority paediatric and young adolescent QoC
indicators (PQoC) for measuring QoC standards for children and young adolescent in
health facilities (see detailed metadata in Accompanying material 1). Through funding
provided by USAID, WHO worked with five countries (Kenya, Laos, Malawi, Sierra
Leone and Uganda) to support the uptake and integration of the 25 PQoC indicators
within their national HIS. This case study describes the process that was followed in
Uganda to assess the readiness of the national HIS to measure these indicators, and
how the indicators were prioritized for incorporation into the national HIS based on
the assessment results and national and subnational priorities.

Step 1: Assessing HIS readiness to measure prioritized QoC indicators

The assessment of HIS readiness to measure the 25 PQoC indicators began with site visits to a diverse sample of 10 health facilities in and around Kampala. These facilities represented varying levels of service delivery and ranges of services provided, including two referral hospitals, two general hospitals, three level IV health centres and three level III health centres. Two teams were formed, each consisting of WHO
and Ministry of Health staff, who were oriented on the 25 PQoC indicators and briefed
on the site visit process. In each facility, the assessment started with a courtesy call
to the management of the health facility. A dedicated health facility staff member
was assigned to take the assessment team through the various service points where
patients receive care (e.g. outpatient department, emergency unit, inpatient unit).
The assessment team was able to collect or take photos of all the blank tools used
to record data about the patient at each service point.

Step 2: Multi-stakeholder indicator mapping process

After the health facility assessment, the teams that visited the health facilities
reconvened for three days in a centralized place to map the 25 indicators and their
metadata against the recording and reporting tools that were reviewed and collected
during the health facility assessment. The mapping team included technical focal
points for child and adolescent health and HIS in the Ministry of Health, senior and
mid-level leadership from selected health facilities that were visited, and delegations
from USAID and WHO.

The HISLA tool (5) was used to assess whether specific indicators and their data
elements can be collected, reported and used in the national HIS. The data sources
for each indicator and/or the data elements were also noted for later use during
ensuing consultations regarding indicator adoption. The determination as to whether
an indicator could be adopted and measured in a specific timeframe was based
primarily on the availability of the indicator data elements in the HIS.

• Indicators recommended for immediate-term adoption were those for which 100% of the data elements required to calculate the indicator were collected in the national HIS. These would typically require minimal effort to adopt and use as part of ongoing quality control or time-limited QI initiatives.
• Indicators recommended for adoption in the medium to long term were those
for which less than 100% of the data elements required to calculate the indicator
were available in the national HIS. These indicators would require moderate to
considerable effort to integrate in the national HIS either by adding the missing data
elements or by developing other tools and mechanisms to collect the indicators.

During the mapping process, the teams were able to map PQoC indicators and the
corresponding data elements to 54 HMIS registers and 10 additional complementary
routine data recording forms. Data from these tools are aggregated and captured
in five reporting forms at health facility level, and the summary data are eventually
uploaded onto DHIS2. Fig. 10 provides a summary of information flow for child health
data within the Uganda HIS based on the mapping results. In summary, 11 (44%) of
mapped indicators were recommended for adoption and use in the short term, 11
indicators (44%) in the long term, and three indicators (12%) could not be assigned to

any timeframe, as the mapping team could not agree on the utility of these indicators in the local context. Fig. 11 shows the number of indicators for which 100% and <100% of the data elements required to calculate them are available in the Uganda national HIS.

Fig. 10. Overview of the Uganda HIS landscape in relation to child health QoC data
[Figure: flow of child health data from 54 HMIS registers and 10 complementary forms into five health facility reporting forms (HMIS FORMs 033b, 105, 108, 106a and 097b), from which summary data are uploaded onto DHIS2.]

Fig. 11. Availability of data elements for WHO-recommended PQoC indicators in the Uganda HIS
[Figure: bar chart of the number of indicators by data element availability: 100% of data elements available, 11; <100% of data elements available, 11; no consensus, 3.]
Step 3: Multistakeholder briefing and planning meeting

After the indicator mapping process, multiple stakeholders – including decision-makers from the Ministry of Health, senior technical officials from partner
organizations, and WHO across its three levels – met to review the findings from
the mapping exercise and agree on next steps. The mapping team presented the
findings and preliminary recommendations on indicators the country could adopt
in the short, medium and long term, as well as possible mechanisms to achieve
this. This was followed by reflections from stakeholders and a way forward from the
Ministry of Health officials.

Chapter 5

5. Tracking and analysing quality of care indicators to guide improvement

5.1 Key messages


■ Regular measurement of selected QoC indicators by QI teams is
essential for tracking progress against improvement aims and for
guiding iterative changes to improve care.
■ A ‘run chart’ is a simple graphic tool to help QI teams plot
and analyse QoC indicator results over time to guide their
improvement efforts.
■ Managers of a multi-site QI initiative (e.g. a subnational health
manager) can use small multiple data visualizations to assess
differences across multiple sites working to achieve a common
improvement aim.
■ In addition to monitoring whether QoC indicator results are
improving as they make changes, QI teams can use qualitative
information to assess whether the specific changes they are
testing are feasible, acceptable and sustainable in the local
setting.
■ By segmenting data with an equity lens (e.g. ethnicity, age), actors can monitor and guide changes to reduce disparities in QoC.

■ Both quantitative and qualitative information is important for improvement and learning.

5.2 Chapter overview


To determine whether (or not) the changes they are making are associated with
improvements in care, QI teams must regularly monitor and analyse results for selected
QoC indicators. Qualitative information is also important to help teams understand
whether the changes they test (based on their analysis of the causes of quality problems),
are feasible, acceptable and sustainable in the local setting. This information helps teams
decide whether to abandon, adopt or refine and re-test specific changes as part of rapid
improvement cycles (e.g. plan, do, study, act). This chapter offers practical guidance
and resources to support the regular measurement, visualization and analysis of QoC indicator results over time by QI teams and by managers supporting a multi-site QI initiative.

5.3 Key terms and concepts

Improvement teams often track QoC indicators using a 'run chart' to analyse the effects of the changes they are making at specific points in time, assess whether care is improving over time and guide iterative cycles of change to improve care.

5.3.1 Why use a run chart to monitor improvement?


A run chart is a graphical representation of data displayed in chronological order. It is
a useful tool for visualizing the results of QoC indicators over time, to identify which
changes are contributing to improvements in care at specific points in time and to track
progress toward an improvement aim. Regularly plotting the results of QoC indicators
on a run chart helps QI teams understand whether the changes they are making are
associated with improvement.

A run chart is not the only way to display data graphically over time. For example,
sometimes health workers use bar charts to display QoC indicator results at discrete
points in time. However, because a bar chart typically displays data aggregated over
multiple time points – such as combining several data points to visualize a baseline
time period and a subsequent time period for comparison – it can be challenging to
assess whether changes implemented at a specific moment in time were associated with
improved QoC indicator results, a key goal of measurement in a QI process.

5.3.2 Practical illustration


Consider a hypothetical scenario involving an improvement initiative at district hospital
X, aimed at reducing the percentage of women who report experiencing verbal abuse
around the time of birth. The same dataset, spanning 21 months (January 2020 to
September 2021), is illustrated in Fig. 12 using a bar graph and in three hypothetical run
chart scenarios in Fig. 13 for comparison. Each figure marks the same point in time when
a specific intervention (or change) was introduced to improve care.
Fig. 12. Bar graph comparing aggregated data before and after a change was introduced to reduce verbal abuse reported by women during delivery
[Figure: bar graph of the percentage of women reporting being verbally abused during delivery, aggregated before and after the change was introduced; average = 54% before and 27% after.]

Note that:
• The bar graph in Fig. 12 suggests that care improved over the 21-month period after the introduction of the intervention. It appears that, on average, the percentage of women reporting verbal abuse during delivery decreased from 54% before the change was introduced to 27% afterward. However, this interpretation does not tell the full story.
• Fig. 13 presents three hypothetical run chart scenarios (a, b, and c) in the same district
hospital as shown in Fig. 12, with the same data set distributed differently over time in
each run chart scenario. Note that on average, the datasets in all three scenarios are the
same for the periods before and after the change was introduced. In the hypothetical
run chart in scenario (a) the hospital was already measuring a decline in the % of
women reporting verbal abuse in delivery before introduction of the change (thus the
decline is probably not due to the change). Additionally, although the percentage of
women reporting verbal abuse remains lower in the first four months after the change,
it can be seen that the indicator soon begins to increase again, confirming that the
change is not associated with sustained improvement. The hypothetical run chart in
scenario (b) shows a consistent and continuous decline in the % of women reporting
verbal abuse after introduction of the change. The hypothetical run chart in scenario
(c) shows continuing variation in the indicator after the change was introduced. Among
the three hypothetical run charts (a, b, and c) in Fig. 13, it is only scenario (b) that
demonstrates improvement.

The key message here is that the same dataset can yield very different insights depending
on how the data are distributed and visualized over time. Closer examination of the data
displayed in Fig. 12 using a run chart format leads to a different interpretation of results.
Because an important purpose of measurement in improvement is to assess the effect
of iterative changes (interventions) on QoC in close to real-time, a run chart that displays
continuous data over time allows QI teams to analyse the effect of the changes as they
are introduced. Teams use this information to decide which changes should be adopted,
adapted or discontinued, and to monitor cumulative progress toward achieving specific
aim(s).

In essence, charting results of QoC indicators very regularly (e.g. by the hour, day, week or month) is essential to understand variation over time within a QoC dataset in a QI initiative.

The frequency of charting and analysing the data may vary by the level of the health system at which QI changes are being introduced. For example, facility-based QI teams that are testing iterative changes will need to analyse data more frequently to guide their improvements than the subnational and national managers supporting these teams. The key responsibility of QI teams is to identify patterns of variation in data over time and to explore their underlying causes (37).

NOTE! There are other more complex methods of displaying and analysing datasets that are not covered in this guide. Users interested in learning more about these methods and additional materials on the use and interpretation of run charts are encouraged to explore additional resources (37–41).

Fig. 13. Illustrative run chart scenarios (a, b, c) with trend data for the period before and after a change idea was introduced to reduce verbal abuse reported by women during delivery
[Figure: three run charts (a, b, c) of the monthly percentage of women reporting being verbally abused during delivery, January 2020 to September 2021; in each scenario the average is 54% before the change was introduced and 27% after, but the month-to-month patterns differ as described in the text.]


5.4 Practical guidance

The following section provides practical guidance on how to create a run chart using simple tools and resources typically available in low-resource environments, and how to apply basic rules to interpret QoC indicator results over time to guide improvement efforts.

5.4.1 Display data in a run chart in a single site


While there are several interactive and automated software options to create run charts
that may be available for purchase through one-time fees or subscription plans, a run
chart can be created on a computer using Microsoft Excel or can be drawn by hand.
The essential steps for creating a run chart are outlined below using an example from a
QI initiative to reduce hypothermia (low body temperature) in newborns. Hypothermia
is associated with increased morbidity and mortality in newborns. Many of the most
common causes of hypothermia are preventable by implementing simple interventions
as part of the care of newborns including: placing a baby skin-to-skin with the mother
immediately after birth; placing a cap and wrapping a baby in a dry cloth; maintaining
a recommended ambient temperature in the delivery and postnatal area of a maternity
unit.

The same steps and principles from the example below are applicable for generating a run
chart in Microsoft Excel without inbuilt automation.
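For teams with access to a computer and Python, the same chart can also be scripted. The following is a minimal sketch (the matplotlib library is assumed to be available, and the weekly values are the illustrative baseline data used later in Box 6):

# Minimal run chart sketch; matplotlib is assumed to be available.
import statistics
import matplotlib.pyplot as plt

weeks = list(range(1, 11))                                   # baseline period: weeks 1-10
hypothermia_pct = [15, 18, 15, 19, 27, 25, 27, 30, 25, 29]   # illustrative weekly values

baseline_median = statistics.median(hypothermia_pct)         # 25

plt.plot(weeks, hypothermia_pct, marker="o")
plt.axhline(baseline_median, linestyle="--",
            label="Baseline median = %g" % baseline_median)
plt.title("Percent of newborns with hypothermia at one hour of birth (temperature < 36.5 °C)")
plt.xlabel("Time in weeks")
plt.ylabel("Percentage of newborns with hypothermia")
plt.ylim(0, 100)
plt.xticks(weeks)
plt.legend()
plt.show()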

a. Prepare the chart on paper

Illustrative actors: An interdisciplinary facility-based QI team including representatives of all health worker cadres who support newborn care processes (directly or indirectly) in the facility (e.g. nurses/midwives, physician, allied health professionals, pharmacist, laboratory technician, health facility manager, health information officer, patient representative or advocate, community health worker, etc.).

Fig. 14 provides an example of the basic run chart format. To produce such a chart on paper, the following steps should be taken.

• Draw vertical and horizontal lines of almost equal length to form an "L"-shaped graph, making sure to leave enough space on either side of the lines for the title and labels of the run chart graph.
• Label the horizontal line with the unit of time or sequence in which the indicator data were collected (e.g. April, May, June, etc.; or Week 1, Week 2, Week 3, etc.; or Day 1, Day 2, Day 3, etc.). In this illustrative example focused on reducing hypothermia in newborns, the QI team decided to measure the percentage of newborns with hypothermia at one hour of birth (temperature < 36.5 °C) on a weekly basis.
• Label the vertical line with the name of the QoC indicator or indicators being plotted and mark the line with a scale at equal intervals. For example, for an indicator measured as a percentage, a sensible interval for the scale can be 10% between two consecutive numbers (i.e. 0%, 10%, 20%, …, 90%, 100%). In this illustrative example, the QI team decided to use a vertical scale of 10% as shown in Fig. 14.

NOTE! When deciding on the time interval for monitoring QoC indicators, it is important to consider the expected time needed to detect a change in results after a change or intervention is introduced.

Fig. 14. Visual display of a run chart without indicator data
[Figure: empty run chart titled "Percent of newborns with hypothermia at one hour of birth (temperature < 36.5 °C)"; vertical axis: percentage of newborns with hypothermia, 0–100% in 10% intervals; horizontal axis: time in weeks, 1–25.]

• Next, plot the indicator data values against time in the order they were collected, and
then connect the data points on the graph with a line as shown in Fig. 15.

Fig. 15. Visual display of a run chart with baseline indicator data plotted against time
[Figure: run chart of the percentage of newborns with hypothermia at one hour of birth (temperature < 36.5 °C), with baseline values plotted for weeks 1–10 on a 25-week time axis.]
b. Calculate a median and draw its value in the chart
• Ideally, aim to plot 6–10 points of baseline data collected before starting your improvement efforts (39). These data will allow calculation of a baseline median. If no baseline data are available (e.g. measuring a new indicator with no prior data), the initial few data points can be treated as a de facto baseline to calculate the baseline median.
• Draw the baseline median (labelled line) using 6–10 baseline data points as shown
in Fig. 16. See Box 6 for guidance on calculating a median. The baseline median is an
important reference for interpreting results applying the “shift rule”, one type of rule
for interpreting run charts. The shift rule is explained in section 5.4.2.

Fig. 16. Plot of indicator values against time with a baseline median line
[Figure: run chart of the percentage of newborns with hypothermia at one hour of birth (temperature < 36.5 °C) for the baseline period, weeks 1–10, with a line marking the baseline median = 25.]
Box 6. Calculating a baseline median based on 6–10 initial data points

In basic terms, the median represents the middle value in a set of numbers, where half of the numbers are below the median and the other half are above it (42). You can quickly calculate the median using Microsoft Excel, or by hand. Assuming the baseline period in the dataset shown in Fig. 16 spans from Week1 to Week10 (the number of data points needed to determine the baseline median usually ranges between 6 and 10), the median can be calculated as follows:

• Begin by sorting the numbers in the dataset in ascending order. The original order is as follows: 15, 18, 15, 19, 27, 25, 27, 30, 25, 29.
• After sorting in ascending order, the numbers are arranged as follows: 15, 15, 18, 19, 25, 25, 27, 27, 29, 30.
• If the dataset contains an odd number of data points (e.g. 7), the median is simply the middle value. However, if the dataset contains an even number of values, the median is calculated as the average of the two middle numbers.
• In this case, the dataset contains 10 values, so the median is the average of the two middle numbers (i.e. the 5th and 6th numbers), which are 25 and 25. Therefore, the median is calculated as (25 + 25) / 2, which equals 25. This value is shown as the baseline median line in Fig. 16.
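For teams that prefer to script this step, Python's standard library reproduces the same calculation; a minimal sketch using the illustrative Box 6 dataset:

from statistics import median

# Baseline values for Week1-Week10 (illustrative dataset from Box 6)
baseline = [15, 18, 15, 19, 27, 25, 27, 30, 25, 29]

# statistics.median sorts the values and, for an even count,
# averages the two middle numbers: (25 + 25) / 2
print(median(baseline))  # prints 25.0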

c. Plot QoC indicator results as changes are implemented


• After plotting the baseline median line on the run chart, QoC indicator results should
be plotted at the agreed time intervals while changes are implemented to try to reduce
the percentage of newborns with hypothermia. Fig. 17 shows the QoC indicator results
from Week11 to Week25 as the QI team introduced changes to improve care.

Fig. 17. Plot of indicator values over time with an extended baseline
median line

[Figure 17: run chart with the baseline period (Weeks 1–10) and QI period (Weeks 11–25), with the baseline median = 25 line extended across the QI period. The plotted percentage falls from 17 at Week 11 to values of 1–3 from Week 16 onward. Y-axis: percentage of newborns with hypothermia, 0%–100%; x-axis: time in weeks, 1–25.]

d. Annotate the run chart

Adding labels or annotations to a run chart is important to show when specific changes were introduced by a QI team. This helps teams understand how specific changes influence the results. It is also helpful to note the timing of any external events or additional interventions implemented during the QI initiative that might have impacted the QoC indicator results (e.g. clinical training in the improvement area). It is straightforward to annotate a paper run chart. If you are using a computer with software that doesn't support annotations, you can print the chart and add the labels manually. In Fig. 18, the run chart is annotated to show when specific events and 'change ideas' were introduced to reduce hypothermia in newborns.
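Where plotting software does support labels, annotations can be added programmatically. A minimal, illustrative matplotlib sketch (assuming a run chart has already been drawn on an axes object named ax; the coordinates and label text are placeholders):

# Label the week when a change idea was introduced (placeholder coordinates)
ax.annotate(
    "1st change idea:\nimmediate skin-to-skin care",
    xy=(10, 25),                     # the data point being labelled (week, value)
    xytext=(13, 60),                 # where the label text is placed
    arrowprops={"arrowstyle": "->"}, # arrow from label to data point
)
# Draw and label the baseline median as a dashed reference line
ax.axhline(25, linestyle="--", label="Baseline median = 25")
ax.legend()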

In this example, the 1st event was an on-site training for all maternity nurses on immediate
and early newborn care. This training was introduced in Week7 as part of the QI initiative
to ensure nurses and auxiliary nurses were aware of and gained skills to implement
evidence-based guidelines for newborn care. The skills covered in the training included
immediate skin-to-skin contact with the mother, drying and covering the newborn,
assessing breathing to rule out asphyxia and resuscitating within the first minute after
birth if needed, checking the newborn’s temperature one hour after birth, and supporting
initiation of breastfeeding within the first hour after birth.

The maternity QI team, which included nurses and midwives, used QI tools including a
process map and the 5 Whys to investigate why hypothermia rates were high at baseline.
They discovered that immediately after cutting the umbilical cord, most newborns,
even those not showing signs of asphyxia, were being taken away from their mothers
for examination, weighing and a sponge bath. This practice delayed skin-to-skin contact
with the mother and exposed newborns to an increased risk of hypothermia. When asked,
the health workers in the maternity explained that this had been the routine for at least
five years, as they believed it was important to examine, weigh and clean the newborn
right away. After some resistance, particularly from auxiliary nurses who provided sponge
baths for newborns, the QI team decided to change this practice. During Week10 they
introduced the 1st change idea, which involved placing newborns immediately after birth
on the mother’s chest, wrapping them in a clean cloth, and covering the baby’s head with
a hat brought by the family.

During the initial test period (Week10 to Week14), the team was surprised to see that
the percentage of newborns with hypothermia dropped quickly (see Fig. 18). The team
decided to adopt this change as the new standard for newborn care. However, some
newborns continued to have temperatures below 36.5°C (hypothermia). The team looked
for other causes and realized that the maternity ward’s windows were often open, even
during the rainy season, and staff sometimes used a window air conditioner on hot
days. To prevent the room from getting too cold, the team introduced a 2nd change idea
during Week14. This involved assigning one staff member per shift to check and record
the ambient temperature in the delivery room and make any needed changes to maintain
the ambient temperature between 22–26°C, as recommended by WHO (e.g. closing the
maternity windows, turning off a window air conditioning unit). During the second test
period (Week14 to Week25), the team noted an even greater decrease in the percentage of
newborns with hypothermia. Although it required some extra effort to check the delivery
room ambient temperature each shift, the QI team decided that this was a change they
could adopt as regular practice without too much difficulty, incorporating it into the unit’s
standard operating procedure.

Fig. 18. Annotated run chart to show when specific events and changes were introduced to reduce hypothermia in newborns

[Figure 18: the Fig. 17 run chart annotated with the 1st event (Week 7: nurses trained on newborn care), the 1st change idea (Week 10: immediate skin-to-skin care) and the 2nd change idea (Week 14: monitoring ambient temperature), with the baseline median = 25 line. Y-axis: percentage of newborns with hypothermia, 0%–100%; x-axis: time in weeks, 1–25.]

5.4.2 Select and apply rule(s) to analyse data in a run chart

As QI teams plot data in a run chart in a QI initiative, it is important that they regularly analyse the data to determine if results are due to random variation or to the changes they are introducing to try to improve care.

NOTE! Two rules that are a bit more technical and are not covered in this chapter are the 'runs' rule and the 'astronomical data point' rule. More information about these rules can be found in (40,43).

Before delving into these rules and their application in QI monitoring, it is important to understand the concepts of random and non-random variation in run charts. When data points for an indicator in a QI initiative fluctuate up and down it doesn't necessarily indicate that the changes being introduced to try to improve care have had a significant effect on the health care process or outcome being monitored. Sometimes, fluctuations in an indicator result may be due to random variation. For an illustrative example of data patterns meeting the criteria for non-random and random variation, see Fig. 19, which is based on a hypothetical dataset.

Fig. 19. Illustration of random and non-random variation as the basis for understanding run chart rules

[Figure 19: hypothetical run chart with a median = 11.5 line. Scenario 1 (Days 1–23) shows non-random variation: a trend (Days 1–11), an astronomical data point (Day 13) and a shift (Days 16–23). Scenario 2 (Days 24–38) shows random variation around the median. Y-axis: indicator value; x-axis: time (days), 1–38.]

Upon examining the pattern of the indicator results in Scenario 2 in Fig. 19, it is clear that the data points fluctuate randomly around the median line between Day24 and Day38. This suggests that any changes made during this period probably had little to no effect on the indicator result and that the pattern of data in Scenario 2 indicates "random variation".

How can QI teams determine if the changes they are implementing are associated with real improvement? Scenario 1 in Fig. 19 shows a different pattern of data points around the median line from Day1 to Day23.

• Between Day16 and Day23, there are more than six consecutive data points that fall
(‘shift’) above the baseline median line. Note that depending on the indicator’s desired
direction, these data points might also appear below the median line.
• On Day13, there is an outlier data point situated above the baseline median line. Note
that this outlier could also be positioned below the median line, depending on the
direction of the indicator.
• Between Day1 and Day11, there are more than five consecutive data points showing
a consistent trend in a single direction (in this case, upward). Note that these data
points might also display a downward trend based on the indicator’s desired direction
of improvement.

These features are referred to as a shift, astronomical data point, and a trend, respectively,
when applying run chart rules. They suggest that the data pattern being observed
represents non-random variation in the dataset. In other words, there is a significant
likelihood that the observed patterns in data are associated with one or more changes
introduced during the period of measurement.

A shift or a trend of the plotted data in a run chart, in the desired direction of a QoC indicator, is a signal of likely non-random variation in the dataset resulting from one or more specific changes introduced during the measurement period. Evidence of non-random variation in a QoC dataset associated with the introduction of specific change(s) indicates that the change(s) are the likely reason for the improvement in the QoC indicator result during the measurement period. The application of these rules to interpret data in a run chart is illustrated below. The shift rule is illustrated in a hypothetical QI initiative to improve diagnosis and treatment of children with pneumonia. The trend rule is illustrated in a hypothetical example to strengthen blood pressure measurement in pregnant women as part of a QI initiative to improve detection and management of hypertensive disorders of pregnancy (e.g. pre-eclampsia, gestational hypertension).

Rule 1: A shift
Sometimes, the dataset plotted in a run chart during a QI initiative may move above or below the baseline median in a single direction (i.e. either all above or all below the median). A run of six or more consecutive data points above or below the baseline median is referred to as a shift (40). When a shift is observed in a run chart, it is likely that the data pattern is due to the introduction of change(s) rather than due to random variation.
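Because the shift rule is purely mechanical, it can be automated. The function below is a minimal, illustrative Python sketch (not part of the guide); it flags six or more consecutive points on one side of the baseline median and skips points that fall exactly on the median, consistent with the note later in this section.

def has_shift(values, baseline_median, run_length=6):
    """Return True if run_length or more consecutive points fall on the
    same side of the baseline median (points on the median are skipped)."""
    side, run = 0, 0
    for v in values:
        if v == baseline_median:
            continue                      # points on the median are not counted
        current = 1 if v > baseline_median else -1
        run = run + 1 if current == side else 1
        side = current
        if run >= run_length:
            return True
    return False

# Illustrative weekly values with a baseline median of 45 (hypothetical data)
weekly = [45, 42, 47, 43, 44, 46, 40, 70, 75, 74, 78, 76, 76, 78]
print(has_shift(weekly, 45))  # True: six or more consecutive points above 45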

Fig. 20 shows the pattern of QoC indicators over time in a QI initiative to improve the
diagnosis and treatment of children with pneumonia. One of the aims and QoC indicators
in the QI initiative was to increase the percentage of children diagnosed with pneumonia
correctly treated with the appropriate dose of amoxicillin. The figure shows a run chart
plotting the pneumonia treatment QoC indicator over time. At baseline, before the start of
the QI initiative, the median percentage of children with pneumonia receiving the correct
amoxicillin treatment was 45%, based on a retrospective review of paediatric outpatient
records. Between Week11 and Week15, the facility QI team tested two sequential changes
to address two underlying causes of poor quality of pneumonia care that they identified
using two standard problem analysis tools, a fishbone diagram and a process map:

a) Lack of time in the busy outpatient clinic for providers to measure the respiratory rate of children, making pneumonia diagnosis difficult in individual children.
b) Frequent stock-outs of amoxicillin in the facility, occurring weekly on average.

Two changes, shown in the annotated run chart in Fig. 20, were introduced by the QI team:

• 1st change idea: Trained auxiliary workers measure and document the respiratory
rate of all children before being seen by a provider, facilitating quick identification of
children with cough and rapid breathing.
• 2nd change idea: Initiation of a daily quick morning huddle with the facility outpatient
nurse in charge and the pharmacist to check and document the stock of essential
medications, including amoxicillin – with an automatic ‘pull’ order for amoxicillin when
the amoxicillin stock drops to less than 50 treatment courses (rather than waiting for
the usual monthly order to arrive).

Fig. 20. Illustration of a shift in a run chart demonstrating improvement in the percentage of children correctly treated for pneumonia

[Figure 20: run chart of the percentage of children with pneumonia correctly treated with amoxicillin. Baseline-period values fluctuate around the baseline median = 45; during the QI period values rise into the 70s and 80s and a new median = 78 is drawn. Annotations mark the 1st change idea (auxiliary workers check respiratory rate before provider consultation) and the 2nd change idea (daily monitoring of amoxicillin supply). Y-axis: percentage of children treated correctly for pneumonia, 0%–100%; x-axis: time in weeks, 1–25.]

As can be seen in Fig. 20, there is a shift in the data around Week13 in which six consecutive data points are plotted above the baseline median line during the time period when the QI team was introducing changes to improve diagnosis and treatment of pneumonia in children.

The shift shown in Fig. 20 demonstrates that the increase in the proportion of children correctly treated for pneumonia from Week13 to Week25 was likely due to the changes introduced by the QI team at Week13 and Week15 rather than the result of random variation. Because the improved indicator result remained stable until Week25 (6–7 weeks after the shift was measured) the QI team can calculate a new median (in this case a 2nd median) to interpret indicator results as they introduce new changes after Week25 to try to improve the percentage of children with pneumonia correctly treated from a median of 78% to a new third median greater than 90%.

NOTE! Sometimes, a data point that is part of a visible shift may fall on the median, in which case it should not be counted.

Rule 2: A trend
Sometimes, the dataset plotted on a run chart may show a clear pattern of consecutive
data points trending in a single direction (i.e. either going up or going down). Where there
are five or more consecutive data points increasing or decreasing in a single direction,
the pattern is referred to as a trend (40). When a trend is observed in a run chart, it is likely
that the data pattern is due to specific change(s) rather than a result of random variation.

There are two important principles to apply when identifying a trend in a run chart:

• Ignore the median line and simply look for five or more consecutive data points either
going up or going down.

68 Measuring and monitoring quality of care to improve maternal, newborn, child and adolescent health services
• If two consecutive data points are of the same value, they should be considered as one data point.
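The trend rule can be checked in the same spirit. A minimal, illustrative Python sketch (an assumption, not code from the guide) that applies both principles: it ignores the median and collapses consecutive equal values into a single point before counting.

def has_trend(values, run_length=5):
    """Return True if run_length or more consecutive points move steadily
    up or steadily down (the median line is irrelevant to this rule)."""
    # Collapse consecutive duplicates so equal neighbours count as one point
    collapsed = [v for i, v in enumerate(values) if i == 0 or v != values[i - 1]]
    direction, run = 0, 1
    for prev, curr in zip(collapsed, collapsed[1:]):
        step = 1 if curr > prev else -1
        run = run + 1 if step == direction else 2  # a new direction restarts a 2-point run
        direction = step
        if run >= run_length:
            return True
    return False

# Illustrative series: 60, 63, 70, 75, 78 is five consecutive rising points
print(has_trend([62, 60, 63, 70, 75, 78, 78, 80]))  # True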
Fig. 21 shows the results of a QI initiative to improve the systematic measurement and documentation of blood pressure in pregnant women at every antenatal care (ANC) visit. High blood pressure is associated with leading causes of maternal mortality and morbidity including gestational hypertension, chronic hypertension, pre-eclampsia and eclampsia. A blood pressure (BP) check at every ANC visit is crucial for the early detection, diagnosis and management of hypertensive disorders of pregnancy.

As shown in Fig. 21, between Week10 and Week12, a training intervention and two changes to the usual ANC process were implemented by the facility in charge and the QI team to try to improve the measurement and documentation of BP at every ANC visit:

• Training intervention: One-day refresher training for all ANC health care providers and auxiliary health workers on hypertensive disorders of pregnancy, including the correct method of measuring BP and the importance of a quality BP check at every ANC contact.
• Change: Addition of a new column in the ANC register to document a woman's BP value at every ANC visit (in addition to documenting the BP value in the woman's ANC card).
• Change: Re-organization of ANC service flow to add a new step in the busy clinic: a trained auxiliary health worker checks and documents every pregnant woman's BP and weight before the midwife sees the woman, with weekly quality checks of measured BP values in at least five pregnant women.

Fig. 21. Illustration of a trend in a run chart demonstrating improvement in the percentage of pregnant women checked for blood pressure during ANC

[Figure 21: run chart of the percentage of women with a blood pressure check during ANC. Baseline-period values fluctuate in the low-to-mid 60s; after the training intervention and changes to ANC patient flow and BP documentation (annotated around Weeks 10–12), values rise steadily to 84–85 by Weeks 19–20. Y-axis: percentage of women with a blood pressure check during ANC, 0%–100%; x-axis: time in weeks, 1–20.]

As can be seen in Fig. 21, soon after the training intervention and changes to ANC processes to improve measurement and documentation of blood pressure at every ANC contact, there is a noticeable trend starting around Week12, when eight consecutive data points show a progressive increase in the proportion of women who had their blood pressure checked during ANC visits. Note that the data points at Week17 and Week18 have the same value. As indicated earlier, if two consecutive data points are of the same value, they should be considered as one data point. Therefore, the entire trend from Week12 to Week20 should be considered as having eight consecutive data points (instead of nine).

5.4.3 Display and analyse data across multiple sites


Subnational managers, such as district MNCAH managers, QoC focal points and health
information officers, are often responsible for overseeing QI efforts across multiple health
facilities. This includes monitoring quality of care results at the individual facility level and
across facilities in multi-site QI initiatives. In some settings, a subnational QoC committee
may be assigned to oversee subnational QI activities including a district or region-wide
QI initiative.

To help subnational managers quickly review and interpret QoC data from multiple facilities, 'small multiple' data visualizations can be used (see Fig. 24 and the country example in section 5.5). This involves displaying run charts for a common QoC indicator from different facilities side by side. These visualizations can be created on paper, in Microsoft Excel, or using data visualization software such as Tableau.
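A small multiple layout can also be generated with common charting tools. Below is a minimal, illustrative Python/matplotlib sketch (the facility names and data are hypothetical placeholders):

import matplotlib.pyplot as plt

# Hypothetical per-facility series for one common QoC indicator
facilities = {
    "Facility A": [40, 45, 50, 62, 70, 75],
    "Facility B": [55, 52, 58, 60, 66, 72],
    "Facility C": [30, 35, 33, 48, 55, 61],
    "Facility D": [60, 63, 59, 65, 71, 78],
}

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 6))
for ax, (name, series) in zip(axes.flat, facilities.items()):
    ax.plot(range(1, len(series) + 1), series, marker="o")
    ax.set_title(name)
    ax.set_ylim(0, 100)   # identical scales make the panels directly comparable
fig.suptitle("% indicator result by reporting period")
plt.tight_layout()
plt.show()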

Managers can use both subnational-level data (e.g. district data aggregated across
facilities) and individual facility-level data to inform management decisions. For example,
they might encourage high-performing facilities to share their successful changes with
other facilities during learning exchange visits, provide extra supervision to lower-
performing facilities, or organize study tours to top-performing facilities. They can also
monitor aggregated data to track and share overall progress at subnational level with
participating facilities, other regions and with national-level stakeholders.

By segmenting data with an equity lens (e.g. race, gender, ethnicity), subnational or facility managers can also monitor how changes in QoC process and outcome indicators differ across sub-populations. Based on these results, they can analyse and address root causes of inequity in the local setting to guide interventions to reduce disparities in QoC.
For subnational equity analysis to be possible, equity stratifiers need to be collected at the
point of primary QoC data collection (e.g. ethnicity, marital status). The country example
in section 5.5 includes an example of segmenting QoC indicator results for adolescents.
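Where stratifiers are captured with each record, segmentation is a simple grouped calculation. A minimal, illustrative sketch in Python using the pandas library (the column names and records are hypothetical):

import pandas as pd

# Hypothetical client-level records including an equity stratifier
df = pd.DataFrame({
    "age_group": ["<20", "20+", "<20", "20+", "20+", "<20"],
    "felt_respected": [0, 1, 1, 1, 0, 1],  # 1 = top-box rating
})

# % of clients who felt respected, segmented by age group
segmented = df.groupby("age_group")["felt_respected"].mean().mul(100).round(1)
print(segmented)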

5.4.4 Use qualitative data


Qualitative data is an essential complement to quantitative data in a QI initiative.
Qualitative data help inform improvement priorities, deepen understanding of local
causes of quality problems, enhance understanding of the reasons for specific patterns
in QoC indicator results, support analysis of promising changes, and help guide effective
management of activities in a QI initiative. Common methods for collecting qualitative
data include observations and focus groups with key stakeholders, including patients
and caretakers (e.g. parents of paediatric patients), QI team members and managers.

Alongside quantitative data, qualitative insights are essential for evaluating the feasibility
and acceptability of changes being tested by QI teams. These insights help teams
decide whether to adopt, refine and retest, or abandon specific changes. For instance,
reorganizing an emergency department triage process might result in the more timely

identification and diagnosis of children with severe acute health conditions. However, if focus groups with health workers and clients reveal widespread dissatisfaction with the reorganized triage system, it will be important to explore alternative changes to improve the triage process that are more acceptable to facility health workers and clients alike.
Qualitative data is particularly important for understanding ‘what matters most and why’
to patients, families, and health workers, shedding light on opportunities to improve
people-centred health care and the experience of patients, families and health workers.
Adaptive management approaches such as ‘after action reviews’ and ‘pause and reflect
sessions’ provide valuable qualitative insights that can strengthen the management and
adaptation of activities in a QI initiative. Box 7 describes the benefits of qualitative data
in QI measurement.

Box 7. Benefits of qualitative data in QI measurement


Exploring the ‘Why?’ behind the data: While quantitative data offers valuable
numerical insights into inputs, processes and outcomes for QI, it often lacks the
context needed to fully understand the results. Qualitative data collected through
methods such as interviews, focus groups or observations can clarify the reasons
behind trends or unexpected results. For instance, if a hospital sees an increase
in neonatal readmissions, qualitative insights from patient interviews or staff
discussions can highlight gaps in discharge, communication, access and follow-up
care processes.

Generating improvement ideas: Qualitative data is a useful tool for developing hypotheses, which can later be tested with quantitative data. Qualitative data is particularly helpful in the early phases of a QI initiative, when the problem is not yet clearly defined. For example, focus groups or interviews with health workers may reveal communication delays between maternity and surgical units during obstetric emergencies requiring surgical interventions (e.g. delayed C-section response times). This hypothesis could then be validated through quantitative indicators such as tracking the time between the decision to operate urgently and the first surgical incision.

Gaining patient insights: Qualitative data offers a deeper understanding of patient experiences and can help inform the selection of improvement aims, patient-reported experience of care indicators, and qualitative monitoring methods focused on what matters most to patients. Structured interviews or participatory workshops with patients, families and/or staff can capture lived experiences and inform interventions to improve patient-centred care as well as the experience of health workers. For example, learning how patients perceive their interactions with health care providers can guide the selection of interventions to improve patients' experience as well as monitoring approaches that include qualitative methods (e.g. periodic focus groups with clients) and quantitative methods (e.g. regular administration of a brief client questionnaire).

Understanding causes of quality problems in a complex health system: Many health systems have underlying complexities that cannot be revealed through quantitative data alone. Qualitative methods can identify how processes in a health system operate in practice versus how they were designed to function. Techniques such as ethnographic observations, shadowing or workflow analysis can map out the actual workings of a system, exposing bottlenecks or system failures that may not be detected via quantitative data. Understanding the root causes of quality problems in complex health systems is essential for identifying and testing changes to address these causes.

Assessing the effectiveness of QI initiatives: While quantitative data can show whether selected QoC indicators have improved after a QI initiative was introduced, qualitative data is crucial for understanding the reasons behind success or failure. Qualitative data can also help identify any unintended consequences. By exploring challenges encountered during implementation of a QI initiative, qualitative insights can guide necessary adjustments to achieve or sustain improvement.

Enhancing the learning process: Establishing continuous feedback loops through regular focus groups, staff meetings or patient interviews can help monitor and strengthen the implementation of a QI initiative. This real-time feedback supports iterative improvements and allows the QI initiative to adapt to changing conditions. Moreover, stories derived from qualitative data can engage stakeholders emotionally, helping to drive motivation and support for further quality improvement efforts.

Table 10. Summary of key actions for national, subnational and facility levels

National
• Provide leadership to foster a culture of data for improvement rather than for blame
• Prioritize and regularly monitor selected QoC indicators to improve care across system levels
• Value and provide resources and leadership for QI initiatives including support for data systems and measurement

Subnational (regional/district)
• Provide leadership to foster a culture of data for improvement rather than for blame
• Support front-line QI teams to collect, analyse and use data to improve care
• Collect and regularly analyse QoC results (facility-specific and aggregated across facilities) to manage district-wide QI activities and provide information to national and facility stakeholders
• Visualize run charts as small multiples (i.e. view multiple facilities' run charts side by side) to guide management of district-wide QI initiative activities

Facility
• Review existing QoC data to inform improvement priorities and define facility targets for improvement aims
• Regularly plot QoC data using run charts
• Interpret patterns in QoC data applying run chart rules to assess the effect of and guide iterative changes to improve quality of care

5.5 Country example


Several examples in this chapter illustrate the use of a run chart by a QI team to track
progress and analyse the effect of iterative changes to improve care. The following country
example describes a multi-facility QI initiative in two counties in Kenya to improve quality
of client-reported family planning (FP) counselling, and the quality of postpartum family
planning (PPFP) services as part of integrated MNH services. The example highlights many
themes covered in this chapter. QoC indicators in the example below include indicators
of effectiveness of care (PPFP initiated within 48 hours after birth), person-centred FP
counselling as measured by client-reported experience, and equity of counselling by

calculation of client experience indicators among adolescent clients. Because there is limited literature on incorporating regular measurement of client-reported experience of care into FP programmes and QI initiatives in resource-limited settings, mixed methods implementation research assessed the feasibility and acceptability of using a FP client questionnaire in a multi-site QI initiative.

A quality improvement initiative to improve quality of family planning counselling and postpartum family planning services in Vihiga and Homabay counties in Kenya
Context

Kenya has a well-established quality model for health and has recently adopted
national health standards for improving quality of care for maternal, newborn and
child health based on WHO standards. In 2021, the Kenya Ministry of Health published
national FP standards. Standard 2 (QI) specifies that facilities should monitor quality
of FP services including client satisfaction and interventions to address gaps and
should conduct and analyse client exit interviews using a standard form. Standard 8
(service delivery and counselling) specifies that clients should receive information,
education and communication to make informed choices on FP methods. Standard 12
(respect and dignity) specifies that clients should receive care that ensures respect,
dignity without discrimination, autonomy, privacy and confidentiality.

QI initiative

Participatory design (late 2022)

In 2022, the MOMENTUM Country and Global Leadership Project in Kenya, funded
by USAID, worked with the Ministry of Health and county and sub-county managers
in Vihiga and Homabay counties to design and implement a QI initiative to support
roll-out of the national FP standards with a focus on QI (standard 2), service
delivery and counselling (standard 8) and respect and dignity (standard 12). Eight
high-volume facilities – with support from county and sub-county managers, the
Midwifery Association of Kenya (MAK) and the project – worked together to apply QI
methods to achieve common improvement aims, regularly measuring and analysing
a common set of QoC indicators and sharing learning across sites. County and sub-
county managers were supported to monitor and analyse results across facilities to
guide oversight and management of activities across sites.

Stakeholders elected to focus on national FP standards 2, 8 and 12 (with the assumption the approach could be used to achieve other standards in the future),
and selected the improvement aims, QoC indicators, and data sources shown
in Table 11. To achieve standard 2 – stipulating the use of standard client exit
interviews – stakeholders recommended adapting two FP client questionnaires
validated in other settings: the Person-centred Contraceptive Counselling Scale
developed by Dehlendorf et al. (44), and the Method Information Index (45). To ensure
that women in Vihiga and Homabay counties understood the questions, the project
supported cognitive testing of a 10-item questionnaire that combined a subset of
questions from the two questionnaires. Client questionnaires were administered in

the FP clinic and the maternity unit in the eight facilities. Questionnaire items were used to calculate client-reported QoC indicators. Existing facility maternity registers and monthly reporting forms were used to calculate the PPFP indicator.
Table 11. Improvement aims, selected quality of care indicators and corresponding data sources

Improvement aim (adapted in each facility): Improve quality of immediate PPFP
FP QoC indicator: % of women who receive PPFP method of choice prior to discharge after delivery
Data source: Kenya Health Information System (based on summary reporting forms populated from data in facility maternity register)

Improvement aims (adapted in each facility): Improve quality of person-centred FP counselling and improve quality of adolescent-centred FP counselling
FP QoC indicators:
• Respect, decision support, communication: % clients who felt respected; % clients who felt their preferences were taken seriously; % clients who felt they were given enough information
• Counselling content: % clients informed about possible side-effects; % clients told what to do if they experience side-effects; % clients told about bleeding changes; % clients informed about other methods; % clients told about switching to another method
Data sources: Client exit questionnaires (~40/month per facility); indicators calculated for all clients and clients < 20 years of age

Implementation (2023–2024)

Key activities during the implementation phase included:

• Initial training (in-person): Initial training included basic QI training for facility
QI teams, called Work Improvement Teams (WITs) in Kenya, and coach-level QI
training for county and subcounty managers and MAK representatives to build
their skills for coaching facility WITs and managing activities of the FP QI initiative.
Training materials were based on existing Kenya Quality Model for Health and
Institute for Healthcare Improvement basic QI training materials, which were
adapted to incorporate practical skills-building exercises focused on the selected
FP aims and QoC indicators.
• Follow-up coaching (in-person/virtual): The training was followed by nine rounds
of on-site coaching visits (every 1–2 months) to the eight facilities conducted by
county/sub-county managers and MAK representatives supported by the project.
The coaching supported multi-cadre facility WITs to analyse root causes of quality
problems related to improvement aims, identify and test changes to address

causes, and monitor patterns in selected QoC indicators using run charts. On-site coaching was augmented by regular virtual support to QI coaches and facility WITs by the project from January 2023 to September 2024. Emphasis was placed on building the confidence and skills of county and sub-county managers and MAK representatives to coach facility WITs.
• Regular monitoring (weekly to monthly): Facility WITs performed regular
calculation, visualization and analysis of the selected QoC indicators with support
from county and sub-county managers, MAK, and project staff to build core skills.
The project developed a user-friendly data entry tool that auto-generated run
charts that could then be annotated with key events and changes being tested. A
baseline median for the PPFP indicator was calculated in each facility based on
six months of retrospective HMIS data. In the absence of pre-intervention data,
start-up ‘baseline’ medians were calculated using the first three months of data
for client-reported indicators (six bi-weekly averages) from the newly introduced
client questionnaires. County and sub-county managers coached WITs to enter
their data to generate run charts and to analyse their results applying run chart
rules (described in this section, above). The county and sub-county coaches were
supported by the project to visualize and analyse aggregate results across the
eight facilities and to use ‘small multiples’ to visualize and analyse results for
individual facilities to guide management of the multi-facility QI initiative (e.g.
facilitating learning exchange visits to higher-performing facilities; increasing
coaching frequency for lower-performing facilities; planning sessions to update
skills or facilitate learning in specific areas).
• Regular learning exchange meetings: Five learning exchange meetings were
convened with county/sub-county managers, MAK representatives and facility
WITs to share results and local solutions, foster friendly competition, and engage
stakeholders in a shared journey of improvement. Representatives from other
facilities and sub-counties joined the 4th and 5th learning exchange meetings to
expose other sub-county and facility managers to the results and learning and to
explore interest in expansion of the QI initiative to new sub-counties and sites.

Selected results

Selected results are shown for two client-reported FP counselling indicators (i.e.
clients who felt respected; clients who reported they were given enough information
to select a FP method) and for the PPFP indicator on initiation of FP by recently
delivered women. Each result includes a brief synthesis of successful changes
implemented by facility WITs to achieve the measured improvements in care.

Client-reported experience of FP counselling and selected changes implemented by WITs

Figs. 22 and 23 show the aggregate data across the eight facilities during the start-up and QI periods for the proportion of clients who reported they were given enough information during FP counselling (Fig. 22) and the proportion of clients who felt respected during FP counselling (Fig. 23). A total of 5781 clients of all ages completed the client questionnaire across the eight facilities from January 2023 to April 2024 (2735 clients of all ages [662 clients < 20 years] in Vihiga County and 3046 clients of all ages [793 clients < 20 years] in Homabay County).

Fig. 22. Proportion of interviewed clients who felt they were given
enough information during FP counselling (all ages and < 20 years)

[Figure 22: run chart of the percentage of interviewed clients who felt they were given enough information (gave a "top-box" rating of 5/5), with separate lines for all ages and clients < 20 years and a baseline median (all ages) line. Values rise from roughly 38–45% early in the start-up period to the high 80s–92% during the QI period. X-axis: bi-weekly periods from Jan-23 (Half 2) to Apr-23 (Half 2), then monthly to Apr-24.]

Fig. 23. Proportion of interviewed clients who felt they were respected
during FP counselling (all ages and < 20 years)
[Figure 23: run chart of the percentage of interviewed clients who felt they were respected (gave a "top-box" rating of 5/5), with separate lines for all ages and clients < 20 years and a baseline median (all ages) line. Values rise from roughly 28–46% early in the start-up period to the high 80s–93% during the QI period, with adolescent clients starting lower than all-age clients. X-axis: bi-weekly periods from Jan-23 (Half 2) to Apr-23 (Half 2), then monthly to Apr-24.]

As seen in Figs. 22 and 23, there is a shift of greater than six consecutive data points above the start-up baseline median in the percentage of clients reporting they received enough information (Fig. 22) and the percentage who felt respected (Fig. 23) during the QI intervention period. The presence of a shift of six consecutive data points above the baseline median during the QI intervention period indicates a high probability that results are due to changes introduced during the QI intervention period (see explanation of the run chart shift rule in section 5.4.2). As seen in Fig. 23, a lower percentage of adolescent clients than of all-age clients reported that they felt respected during the start-up 'baseline' period. Over time, the difference in reported respect between adolescent clients and all-age clients diminishes. Selected common changes implemented across facilities to improve quality of FP counselling for all-age and adolescent clients are described below.

Fig. 24. Small multiple facility-specific results: Proportion of interviewed clients in facilities A–H who reported they felt respected during FP counselling (n=5781 total clients across all eight facilities from January 2023 to April 2024)

[Figure 24: eight side-by-side run chart panels, one per facility (Hospital A to Hospital H), each showing the percentage of clients of all ages who felt respected on a 0%–100% y-axis, with the baseline period and a baseline median line marked. X-axis: periods from Jan-23 (Half 2) to Apr-24.]

Fig. 24 shows the individual facility results for the percentage of clients who felt respected during FP counselling during the QI period. As seen in Fig. 24, the pattern of results over time for the percentage of clients who felt respected varies by individual facility, with 6 of 8 facilities demonstrating a shift of six consecutive data points above the start-up baseline median during the QI intervention period. Fig. 24 demonstrates the importance of analysing patterns in facility-specific data (in addition to aggregate data), since aggregate results may obscure important differences across individual sites. Analysis of patterns in aggregate and facility-specific data by managers is important to guide effective management of a multi-facility QI initiative.

Selected changes implemented by facility WITs to improve person-centred FP counselling in the facility FP clinic and maternity ward

• Regular administration of client questionnaire by facility WITs.


• Regular analysis of client-reported QoC indicators by facility WIT members using
a run chart to understand the effect of the changes they are testing and to identify
additional change ideas to address areas of poor performance.
• Periodic focus group discussions with clients (including adolescents) to understand
clients’ perceptions of respect and desired information during FP counselling.
• Use of a flipchart, anatomical models and fixed tray of FP methods that women
could touch to support person-centred individual and group FP counselling.
• Redesign of the outpatient FP clinic triage process to avoid redundant procedures
at multiple points.
• Recruitment of youth champions to sensitize adolescents on FP services by
initiating biweekly weekend clinics and offering FP services during weekdays at
flexible hours.
• Training youth champions and stationing them at the maternal–child health clinic
to counsel and direct adolescents to respective service points.

Enabling interventions implemented by county and sub-county managers and MAK representatives supported by the project team

• On-site clinical mentorship of nurses organized by the facility in-charge, sub-county reproductive health managers and MAK representatives (mentoring focused on strengthening health worker confidence and skills to provide person-centred, evidence-based FP counselling using medical eligibility criteria (MEC) wheels and balanced strategy plus counselling cards (https://knowledgecommons.popcouncil.org/departments_sbsr-rh/727/)).
• Reinforcement of FP counselling skills using interactive role plays and peer
feedback during learning exchange meetings.
• Orientation of the maternal–child health team on adolescent-friendly FP
counselling approaches, emphasizing respect, confidentiality and non-judgmental
attitudes.
• Introduction of a tally sheet to document FP counselling, including dates, numbers counselled and the methods selected. Facility WITs reviewed the tally sheet during weekly meetings.

Initiation of a PPFP method of choice by women who recently delivered

Fig. 25 shows the aggregate results across the eight facilities during the baseline and QI period for the proportion of recently delivered women who initiated a PPFP method prior to discharge. The baseline data is based on a retrospective review of the maternity records. A total of 17 841 deliveries occurred across the eight facilities during the baseline and QI periods. Selected common changes implemented by the facilities are noted below.

Fig. 25. Proportion of recently delivered women who initiated a PPFP method prior to discharge in the eight facilities (data source: Kenya Health Information System)

[Figure 25: run chart of the percentage of women who delivered in facilities and received a family planning method within 48 hours, with baseline and QI periods and a baseline median line. Monthly values rise from low single digits during the baseline period to the 50–65% range by mid-2024, reaching 65% in the final month shown. Y-axis: 0%–100%; x-axis: time in months, Jul-22 to Jul-24.]

Selected changes implemented by facility WITs in the maternity ward to improve initiation of a PPFP method of choice by women who recently delivered (see also changes above to improve quality of person-centred FP counselling):

• Designation of a 'PPFP nurse' to provide counselling and a FP method of choice prior to discharge for all recently delivered women in the maternity ward.
• Strengthening of PPFP counselling during antenatal care (ANC) visits, with systematic documentation of the selected FP method in the ANC section of the mother–child booklet.
• Confirmation of the FP method selected during ANC (as documented in the mother–child booklet) during intrapartum and postpartum care, including support for selection of a different method if desired.
• Inclusion of the spouse in PPFP counselling when desired by the woman.
• Use of a privacy screen (or separate room when available) to ensure privacy during counselling and administration of a PPFP method of choice.
• Inclusion of a pharmacy technician on facility WIT and standardization of re-order
processes to ensure minimum buffer stock of FP commodities.
• Public display of stock levels in maternity ward and FP clinic with review during
weekly WIT meetings.
• Orientation and mentorship of village health teams (volunteers) by the facility midwife on key FP messages to include in existing regular village counselling sessions.
• Regular review and analysis of PPFP QoC indicator results using a run chart and
applying run chart analysis rules during WIT meetings.

Chapter 6

6. Assessing and improving data quality to strengthen quality improvement results and stakeholder trust

6.1 Key messages


• The data collected and utilized in a QI initiative must be of high
quality to ensure accurate insights for meaningful improvements
and to build trust with QI stakeholders.
• There is a cycle of data use and quality, which can either strengthen
or erode both elements: when the quality of data is poor it is less
trusted and less used, leading to lower demand for data and
investment in strengthening data systems.
• Using data on an ongoing basis provides an opportunity to identify
data quality problems early and develop appropriate remedial
actions.
• Interventions to assess and improve quality of data can be applied
to data already integrated into HIS or to data that is collected during
a time-limited QI initiative (e.g. a QI initiative in a single facility or
district).

6.2 Chapter overview


Chapter 3 describes some approaches to assessing the availability of data in the local
HIS for QoC indicators, required for both long- and short-term monitoring of QI efforts.
Guidelines for assessing and improving the quality of data collected routinely in the
national HIS are available in several technical resources published by WHO (47–49). These
resources include generic guidance and tools on data quality assurance (DQA) and data
quality control (DQC), which can each be used for or adapted to any programme area.
The DQA training package (49) includes an application for use in the DHIS2 for routine
data quality checks and for annual data quality desk reviews. For countries that do not
use DHIS2, Excel and CSPro tools are available together with the training materials.

This chapter does not reproduce the guidance already published by WHO. Rather, it
provides guidance on localized data quality improvement (DQI) approaches, applying
the principles and methods used in QI initiatives to understand causes for poor quality
of data and iteratively test changes to improve data quality based on identified causes.

6.2.1 Vicious cycle of poor-quality data

Successful QI initiatives generate and use high-quality data to guide iterative improvement. As described in chapter 4, data required for a QI programme can be obtained directly from various data sources in the local HIS, or QI teams can generate new data based on their information needs for a specific QI programme. However, in many low-income settings, data collected through the existing HIS may be of poor quality. The lack of quality HIS data has created pressure for the measurement community to use alternative methods of generating health-related estimates including mathematical modelling. Such alternative methods continue to divert already scarce resources that are needed to strengthen HIS (50–53). Furthermore, when the quality of data is poor, some donors may collect their own data by establishing parallel data systems, which further fragments country-level data pipelines. This creates a vicious cycle of poor data quality as illustrated in Fig. 26.
Fig. 26. A vicious cycle of poor data quality in HIS

[Figure 26: cycle diagram. Weak data demand and use (often resulting in a perpetual culture of non-evidence-based programming) → limited investment in health information system building blocks (e.g. human resources, ICT infrastructure, capacity building) → weak health information system capacity (e.g. limited capacity to generate high-quality data) → poor data quality (across many dimensions of data quality) → data not trusted (sometimes resulting in alternative methods of generating health metrics, and fragmentation of the health information system as donors or partners create their own data systems) → back to weak data demand and use.]

Source: (46).

In many LMICs, health facility data can be inaccurately recorded or not recorded at all
(54–56). Some of the factors that lead to poor data quality in health facility data collection
and recording tools include: illegible handwriting during data capture or reporting;
unsuitable or unstandardized data formats; lack of a standardized data dictionary in
health facilities; lack of time or motivation to record high-quality data; calculation errors;
insufficient data quality checks at data entry level; non-adherence to data collection or recording guidelines and data definitions; and a poor culture of data use which
disincentivises health facility-based teams to collect and report high-quality data (57).
In QI programmes, these challenges can emerge when new QoC indicators are integrated into the HMIS for routine measurement and monitoring. If health care workers perceive these new indicators as an added burden, especially when their value is not yet evident, data recording and indicator calculation may initially suffer in quality. Likewise, established indicators can be affected by ingrained practices, such as habitual inaccuracies in data entry, which can also undermine data quality. Efforts to improve the quality of data must therefore address the underlying factors that contribute to poor quality of data.

6.2.2 Mechanisms for improving data quality


Assessing and improving the quality of data in the context of a QI initiative will typically involve three interrelated mechanisms, namely DQA, DQC and DQI. These concepts are explained below.

Data quality assurance


DQA is preventive in nature and involves the use of tools and procedures aimed at
minimizing or even preventing errors during data recording, extraction, collation,
reporting, analysis and dissemination. If done well, and given its preventive focus, DQA
can be a cost-effective intervention to improve the quality of data collected as part of a QI
initiative. Proactive prevention of errors means that less time and money will be required
to correct errors retrospectively.

Data quality control


Conversely, DQC involves the use of data verification and review techniques and tools
on an ongoing basis, with a view to ascertaining the extent to which pre-determined
data quality requirements are met. DQC is therefore an important component of data
quality improvement in that data quality checks are routinely conducted, process failures
identified, and corrective efforts put in place to improve the quality of the data at any
level of the HIS hierarchy.

Data quality improvement


DQI is an approach that applies key QI principles to improve the quality of data based on: a) an understanding of leading data quality problems in a local setting (e.g. information generated through DQC and other assessments of data quality); b) a structured analysis of the root causes of poor data quality in the local setting using QI analytic tools such as fishbone diagrams, the ‘5 whys’ and process mapping; and c) testing of local changes that address root causes of poor-quality data and monitoring a few selected indicators of data quality (e.g. data completeness, accuracy, timeliness) to determine if the quality of data is improving.
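
For teams that extract facility data electronically, the data quality indicators named above can be computed with very simple logic. The sketch below, in Python, is a minimal illustration only: the report records, field names and checks are invented assumptions and would need to be replaced with locally agreed data definitions.

    from datetime import date

    # Hypothetical monthly facility reports; all field names and values are
    # illustrative assumptions, not part of any national dataset.
    reports = [
        {"facility": "A", "due": date(2024, 5, 5), "submitted": date(2024, 5, 4),
         "children_seen": 120, "children_assessed_for_danger_signs": 115},
        {"facility": "B", "due": date(2024, 5, 5), "submitted": date(2024, 5, 9),
         "children_seen": 80, "children_assessed_for_danger_signs": None},
        {"facility": "C", "due": date(2024, 5, 5), "submitted": date(2024, 5, 5),
         "children_seen": 95, "children_assessed_for_danger_signs": 101},
    ]

    required = ["children_seen", "children_assessed_for_danger_signs"]

    # Completeness: share of required fields that are actually filled in.
    filled = sum(1 for r in reports for f in required if r[f] is not None)
    completeness = filled / (len(reports) * len(required))

    # Timeliness: share of reports submitted on or before the due date.
    timeliness = sum(r["submitted"] <= r["due"] for r in reports) / len(reports)

    # Internal consistency (a crude accuracy proxy): a numerator should not
    # exceed its denominator.
    def consistent(r):
        n = r["children_assessed_for_danger_signs"]
        return n is not None and n <= r["children_seen"]

    consistency = sum(consistent(r) for r in reports) / len(reports)

    print(f"Completeness: {completeness:.0%}")  # 83% in this toy example
    print(f"Timeliness:   {timeliness:.0%}")    # 67%
    print(f"Consistency:  {consistency:.0%}")   # 33%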

6.3 Practical guidance

6.3.1 Localizing the assessment and improvement of data quality using QI methods

Implementing QI methods to enhance the quality of data collected and reported by health facilities for improving MNCH care can significantly complement existing government mechanisms, such as DQA and periodic data quality reviews. The importance of this hybrid approach is highlighted below:

• DQA mechanisms establish guidelines and standards for data quality at a national or
subnational level, ensuring consistency and compliance across the health care system.
QI methods can provide a systematic, step-by-step process for health care facilities
to implement and operationalize these national standards. By embedding QI cycles
like the Plan-Do-Study-Act, organizations can continuously improve how they meet
national data quality standards in a practical, timely and localized manner.
• Periodic data quality reviews highlight gaps, inconsistencies and areas where data
quality falls short of national standards. These reviews provide important insights
but are often more retrospective and evaluative. QI methods are action-oriented
and can help QI teams respond to the gaps identified by these reviews. For example,
after a review identifies issues with timeliness or completeness of data for newly
introduced QoC indicators, QI approaches can help health facilities and administrators
at subnational level systematically design, test and implement interventions to address
these specific gaps.
• Data quality reviews are also often conducted at specific intervals (e.g. annually or semi-annually), focusing on snapshots of data quality over time. QI methods can
complement such processes by fostering continuous improvement in data quality
through integrating ongoing monitoring and real-time feedback loops. This ensures
that improvements are sustained between periodic reviews and that health facilities are
continuously improving their data collection, management and reporting processes,
not just responding to the review outcomes.
• If not implemented properly, national and subnational DQA mechanisms may also
appear to promote a top-down approach to ensuring data quality, due to their
focus on compliance and adherence to protocols. When QI methods are embedded
in such mechanisms, this can promote a culture of data quality within health care
organizations. By involving staff at all levels in continuous improvement efforts, QI
creates a bottom-up culture where data quality is seen as an ongoing priority, not just
something to be reviewed periodically.
• National and subnational DQA mechanisms also typically involve auditing and
validating data reported through national or subnational HIS, focusing on various
data quality attributes for the data that feeds into aggregated reports. Using QI
methods to improve data quality can help ensure that the data feeding into national and subnational systems is accurate at the point of care (a simple illustration of such point-of-care checks is sketched after this list).
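
As a simple illustration of what such point-of-care checks might look like, the sketch below validates a single register entry before it is accepted. All field names, rules and ranges are assumptions for demonstration purposes; real rules should follow the national data dictionary and the registers in use.

    from datetime import date

    def validate_entry(entry: dict) -> list[str]:
        """Return a list of data quality problems for one register entry.

        The rules and field names below are illustrative only and would be
        replaced by locally agreed data definitions.
        """
        problems = []
        # Required-field check.
        for field in ("visit_date", "age_months", "classification"):
            if entry.get(field) in (None, ""):
                problems.append(f"missing required field: {field}")
        # Plausible-range check for an under-five (IMCI) service.
        age = entry.get("age_months")
        if age is not None and not 0 <= age < 60:
            problems.append("age_months outside expected range (0-59)")
        # Logical date check.
        visit = entry.get("visit_date")
        if visit is not None and visit > date.today():
            problems.append("visit_date is in the future")
        return problems

    # Example: plausible age, but the classification field was left blank.
    print(validate_entry({"visit_date": date(2024, 5, 2), "age_months": 14,
                          "classification": ""}))
    # -> ['missing required field: classification']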

6.3.2 Key actions at health facility and subnational level

Illustrative actors: health information officers, in consultation with QI team members and subnational HIS and M&E managers as appropriate.

Health facilities are the primary source of health care data, where patient information is recorded and services are delivered. Ensuring data accuracy at this level is foundational for generating reliable reports at subnational, national and global levels. Quality individual patient data at the facility level also directly influences the quality of patient care because accurate individual patient data is essential for clinical care decision-making (e.g. making an accurate diagnosis; selecting and accurately dosing an appropriate medication; accurately monitoring a patient’s clinical status; providing appropriate follow-up care). By improving data quality at the source, the entire data reporting and utilization chain benefits from cleaner, more reliable data. This reduces the need for extensive data cleaning or validation at subnational or national levels, streamlining reporting processes and improving overall system efficiency. However, subnational actors also have a role to play in ensuring the quality of data submitted from health facilities. The key actions at both levels are reviewed in this section of the chapter.

a. Identify and define the data quality problem


Identifying the specific data quality issue and its underlying causes is a crucial first step in
crafting a strategy to improve data quality. While common data quality problems and their
causes are generally understood, they can vary across health facilities at any given time.
Clearly defining the local data quality issue and its causes enables targeted interventions
and provides a solid foundation for a successful data QI initiative.

• Start by collecting information that clearly defines and demonstrates the existence and
scale of data quality problems. Both qualitative and quantitative data are valuable at
this stage.
– For instance, in a multi-site QI initiative overseen at the subnational level, the
manager might observe unusual patterns in the completeness and timeliness of
data submitted by health facilities for newly introduced QoC indicators prioritized
for long-term monitoring of integrated management of childhood illnesses (IMCI).
To validate these observations, the manager could commission a data quality audit of historical data. This audit would help quantify the percentage of missing fields, incorrect entries and delays in data submission (a simple tally of this kind is sketched after this list). Establishing this baseline provides measurable evidence of the problem and offers a concrete starting point for monitoring progress toward improvement.
– Once the audit has been completed, the manager may also convene the
subnational and/or the concerned health facility QI teams during the regular QI meeting to discuss where the problem is occurring, who is involved in or affected by the problem, when it occurs, and the consequences of having this problem for the QI initiative. From here, a clear and concise problem statement that captures the issue in specific and measurable terms can be prepared. In this case, a problem statement can be written as follows:
Health facilities X and Y have been reporting incomplete and inaccurate data for the IMCI QoC indicators, with over 30% of fields left blank or filled incorrectly in routine reports over the past three months. This has delayed timely decision-making and resource allocation, undermining efforts to improve IMCI-related processes and outcomes.

• When multiple data quality problems are identified, it may be necessary to prioritize problems based on their urgency, the feasibility of improvement and their impact on the QI initiative. Prioritization ensures that the team focuses on the most critical issues first.
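
In practice, the baseline audit described above reduces to a per-facility tally of problem fields. The following minimal sketch uses hypothetical facilities and statuses, with the 30% threshold borrowed from the example problem statement, to show one way such a tally could be computed.

    from collections import Counter

    # Hypothetical audit rows: (facility, status of one reported field), where
    # status is "ok", "blank" or "incorrect". All values are illustrative.
    audit = [
        ("Facility X", "blank"), ("Facility X", "ok"), ("Facility X", "incorrect"),
        ("Facility Y", "blank"), ("Facility Y", "blank"), ("Facility Y", "ok"),
        ("Facility Z", "ok"), ("Facility Z", "ok"), ("Facility Z", "ok"),
    ]

    totals, problems = Counter(), Counter()
    for facility, status in audit:
        totals[facility] += 1
        if status in ("blank", "incorrect"):
            problems[facility] += 1

    # Flag facilities where more than 30% of audited fields are blank or
    # incorrect, mirroring the threshold in the example problem statement.
    for facility in sorted(totals):
        rate = problems[facility] / totals[facility]
        flag = "  <-- prioritize" if rate > 0.30 else ""
        print(f"{facility}: {rate:.0%} of audited fields blank or incorrect{flag}")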

b. Analyse the root causes of the data quality problem(s)


Identifying the root causes of the data quality problem enables QI teams to understand
why data quality problems occur and to develop targeted solutions that address the
underlying issues, rather than just the symptoms. A thorough root cause analysis will
help prevent recurrence of the issues and ensure that any improvements are sustainable.

• Consider creating a process map or flowchart that illustrates how data are collected,
entered, processed and reported. Understanding the flow of data helps identify
potential points where errors could occur. Mapping patient flow and data flow at
service points is also useful in visualizing where problems arise. Chapter 5 explains
and gives an example of how data flow mapping can be conducted in a health facility for any technical area.
• Consider applying structured problem-solving tools to uncover the root causes of data quality issues. A host of such tools used in mainstream QI can be adapted to a data QI initiative. This chapter does not explain each of these methods in detail but provides some illustrative examples of their application in data QI.
– Fishbone diagram (also known as an Ishikawa diagram): This is a visual tool used to
categorize potential causes of a problem into different types (e.g. people, process,
technology, environment and data). Each category helps the team brainstorm
possible sources of a given issue (58). Using the IMCI example, assuming the
identified problem is inaccurate data, the ‘People’ category could include issues
like insufficient training of staff, while the ‘Process’ category might include lack of
standardized data entry procedures.
– 5 Whys: This is a simple but effective method of asking ‘why?’ repeatedly (usually
five times) to drill down into the root cause of the problem. This technique helps
reveal deeper, systemic causes that might not be immediately obvious (59). Using
the IMCI example, assuming the identified problem is data entry errors, the 5 Whys
could be developed as follows:
i. Why are there frequent data entry errors for the newly introduced IMCI QoC
indicators?
Because the staff entering the data often make mistakes while inputting patient information.
ii. Why do staff make mistakes while inputting patient information?
Because they are rushed due to the high workload and insufficient time to
enter data carefully.
iii. Why is the workload so high for the staff responsible for data entry?
Because there aren’t enough staff members assigned to handle both patient
care and data entry tasks.

iv. Why aren’t there enough staff members to manage both patient care and data entry?
Because the facility has not prioritized hiring additional staff or reassigning roles to specifically address data management needs.
v. Why hasn’t the facility prioritized hiring or reallocating staff for data management?
Because there is a lack of awareness of the importance of accurate data and no budget allocated for expanding the team dedicated to data quality.
– Pareto analysis: This is a technique used to identify the most significant causes of
a problem, based on the principle that a small number of causes often contribute
most to the problem (the 80/20 rule). By prioritizing the most common causes,
the team can focus their efforts effectively (60). For example, a Pareto analysis of data quality issues related to the IMCI QoC indicators might reveal that 80% of data inaccuracies are caused by just 20% of health workers who were not trained in the new data collection system (a small worked example is sketched after this list).
• Consider engaging the right stakeholders on the QI teams to gather credible insights
that may not be evident during data review. Such insights can be gathered during the
same QI meetings mentioned in the previous action on problem identification and
definition.
• After identifying the root causes, prioritize them based on their impact on the data
quality issue and the feasibility of addressing them. This ensures that the team focuses
its efforts on the most significant causes that are within their control to change. For
example, if training is identified as a major root cause, and it is feasible to implement
a structured training programme, this issue may be prioritized over technical issues
that require extensive resource investments.
• Depending on resource availability, it may be helpful to confirm the identified root
causes by reviewing data, engaging stakeholders and testing assumptions. For example,
the team may choose to validate that lack of training is a root cause of inaccurate data
recording by reviewing staff training records or observing data entry practices in real
time.
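
To make the Pareto technique concrete, the short sketch below ranks hypothetical causes assigned to inaccurate IMCI records and reports the cumulative share of the problem accounted for as causes are added. The cause labels and counts are invented for illustration.

    from collections import Counter

    # Hypothetical audit findings: each entry names the cause assigned to one
    # inaccurate IMCI record. Causes and counts are illustrative only.
    causes = (["untrained staff"] * 48 + ["no data dictionary"] * 7 +
              ["illegible handwriting"] * 3 + ["register stock-out"] * 2)

    counts = Counter(causes).most_common()
    total = sum(n for _, n in counts)

    # Rank causes and report the cumulative share each adds, Pareto-style.
    cumulative = 0
    for cause, n in counts:
        cumulative += n
        print(f"{cause:22s} {n:3d}  cumulative: {cumulative / total:.0%}")
    # 'untrained staff' alone accounts for 80% of inaccuracies (48/60),
    # so training is the natural first target.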

c. Develop and test interventions


After identifying the root causes of the data quality problem, the next step is to develop and test localized interventions. This step involves generating potential solutions (or change ideas), referred to as ‘interventions’ in this chapter, to address the identified problems and then testing these ideas on a small scale before adopting the intervention permanently and extending it to other facility units and facilities. It is important that adopted interventions are effective, feasible and sustainable.

• Once the root causes have been identified and prioritized, the QI teams can engage in
brainstorming sessions to generate ideas for change. All team members, regardless of
their role, should be encouraged to contribute suggestions for solving the identified
problems. When trying to find a potential solution, it is important to:
– Ensure that change ideas directly address the root causes uncovered during the
analysis phase. For example, the solutions should target the systemic, human, technical or process factors that are contributing to the problem, whichever of these constitute the root cause(s).

– Leverage all members of a multidisciplinary QI team. Clinical staff might suggest workflow changes, data clerks might offer insights into improving data entry tools, and IT staff might propose technological upgrades. Finding a balance between several and sometimes conflicting perspectives may be tough but is important, nevertheless.
• After generating a list of potential change ideas, the QI team should evaluate and
prioritize them based on several criteria:
– Can the change idea be realistically implemented with the available resources,
staff, and infrastructure?
– Will the change idea significantly improve the identified problem? It is important
to prioritize interventions that address the most critical root causes and have the
potential for high impact. For example, if the QI team is considering both training
workshops for existing staff and hiring additional staff, they might prioritize
training workshops if they are more immediately feasible and expected to have a
more direct impact on data quality.
– Can the intervention be maintained over time? Consider whether the change can
be easily integrated into routine practices or workflows.
– Is there existing evidence or previous experience suggesting that the intervention
will likely lead to improvement?
• For each selected intervention, the QI team should create an action plan, including:
– The specific improvement to be achieved by implementing the selected
intervention(s) (e.g. to reduce data entry errors by 20% within three months).
– The steps required to implement the intervention, including who is responsible
for each step and what resources will be needed. An implementation plan may,
for example, include developing a training curriculum on data entry standards for the indicators of interest, scheduling and conducting the training session for all data clerks and health workers, and providing data entry job aids such as checklists or guides at each service point.
– A timeline for implementing the intervention, including key milestones and
deadlines.
– Indicators to assess whether the intervention is leading to improvement. These
indicators should be specific, measurable, achievable, relevant and time-bound
(SMART).
• Consider carrying out small-scale testing (i.e. a pilot) of the intervention depending on
the geographical scope. If the intervention is to be implemented across several health
facilities, it would be important to test it on a small scale before full implementation
to see how it works in practice. Testing allows the QI team to refine the idea, address
any unforeseen challenges, and gather data on its effectiveness before scaling up. It
is also important to start small by testing the intervention in a single service point or
group of staff, which would allow teams to manage issues that arise during the testing
phase more effectively. For example, if the change idea involves introducing a new
data entry process, the QI team can pilot the process in one unit of the health facility
to gather feedback and assess impact before rolling it out more widely.
• It is important to define time-bound indicators to monitor the effect of the priority interventions on the data quality problem. The performance indicators might be quantitative (e.g. percentage of records with data errors, timeliness of data reporting) or qualitative (e.g. staff satisfaction, ease of use). During the testing phase, the QI team should closely monitor the implementation and collect data to track progress. This involves gathering feedback from staff involved in the testing and monitoring the selected indicators regularly (a simple weekly tracking sketch follows this list). For example, after training staff in a new data entry protocol, the QI team in participating health facilities might review a sample of data records every week to see if errors are decreasing. Additionally, they could conduct brief enquiries (e.g. interviews) with data clerks or health workers to gather perspectives on whether the new process is easier or more difficult to use.
• During the planning phase, it is important to anticipate potential challenges or
resistance that may arise during the testing phase. For example, the staff may initially
resist changes due to unfamiliarity or perceived increased workload, or technical issues
with data entry tools may arise. It is therefore important to create contingency plans
to address these challenges proactively, for example by developing a plan for ongoing
support, such as regular onsite support conducted by subnational QI teams.
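
The weekly review described above can be supported with very simple tooling. The sketch below tracks a hypothetical error rate against a pre-intervention baseline and prints a rough text-based run chart; Chapter 5 describes proper run chart construction and interpretation, so this is only a minimal stand-in using invented numbers.

    # Hypothetical weekly audit of a sample of records during the testing phase.
    # Each tuple is (records_sampled, records_with_errors); values illustrative.
    weekly_samples = [(40, 8), (40, 7), (40, 5), (40, 4), (40, 3)]

    baseline_rate = 0.20  # error rate measured before the intervention

    for week, (sampled, errors) in enumerate(weekly_samples, start=1):
        rate = errors / sampled
        trend = "improving" if rate < baseline_rate else "not yet improving"
        # A crude text 'run chart': one block per percentage point of error rate.
        bar = "#" * round(rate * 100)
        print(f"Week {week}: {rate:5.1%} {trend:18s} {bar}")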

d. Implement effective interventions at scale


This step entails implementing the interventions that have been piloted and found to be
successful at small scale and gathering additional context-specific data to evaluate their
performance in the new setting.

• Before fully implementing the intervention at scale, ensure that all necessary resources,
staff and materials are in place. Preparation may include: communicating the upcoming
change to all relevant stakeholders at national and subnational levels to ensure that
they are aware of the purpose, process and their roles in the implementation; providing
training or orientation to individuals who will be directly involved in implementing the
intervention; and ensuring that any tools, templates or technological systems needed
for the intervention are ready and accessible.
• Closely monitor the process as the intervention is being implemented in new sites,
as this is important for the identification of any unexpected issues, bottlenecks or
challenges that may arise. Monitoring also involves gathering feedback from those
involved in the implementation. Data can be collected for the same indicators used
during the pilot phase to evaluate the effectiveness of the change across the settings
of interest.
• Throughout the implementation phase, it is important to engage with staff and other
stakeholders to gather their input and feedback. This can be done at the same time
the regular QI meetings are taking place, where both data quality issues and the wider
QI related issues can be discussed simultaneously. This helps identify areas where
the changes are working well and where adjustments may be needed. Staff buy-in is
critical to the success of the change, so fostering open communication is essential.
• If any issues arise during the initial implementation of an intervention, the QI team
should be prepared to troubleshoot problems and make minor adjustments to the
process as needed. However, major changes to the plan should be avoided unless
necessary, as they could undermine the validity of the change ideas. For instance, if data clerks and health care workers are struggling to use a newly adapted health facility register or case note, the QI team might provide additional training or technical
support. If a specific workflow is causing delays, the team could adjust the sequence
of steps to improve efficiency.

• Consider maintaining detailed accounts of what was implemented, how it was done and any deviations from the original plan. Documentation is crucial for analysing the results during improvement and for replicating the change if it proves successful.

e. Evaluate progress and opportunities for further implementation
It is important for QI teams to evaluate the effectiveness of the implemented interventions
to determine whether they led to the desired improvements. This phase involves a careful
and systematic analysis of the data collected during the large-scale implementation
phase, comparing it to baseline data and the objectives set during the planning phase.

• For quantitative data, consider applying the same methods described in Chapter 4 for analysing and visualizing data collected routinely in health facilities. The principles
remain the same although other data analysis techniques can be used to assess the
significance of change. The QI team might be interested to see if the performance
steadily improved over time or if it fluctuated, or whether there are specific days or
times when errors spiked or dropped. This kind of information can feed into the next
phase of improvement planning. They may also segment the data based on the roles
of the data capture team, service point, etc.
• For qualitative data, it might be useful to review feedback from staff and stakeholders to assess whether the change is perceived as effective and sustainable. For example, the baseline data showed a 20% error rate in data entry before the intervention; after implementing a new training programme, the error rate dropped to 10%. This meets the QI team’s objective of reducing errors by at least 50% (a worked check of this arithmetic is sketched after this list), but may not be sustainable if the resources needed to bring about this change are not maintained. It may also be important to consider whether there have been unintended consequences of the change, such as increased workload or reduced efficiency in the routine care of patients.
• Once the data has been analysed, consider holding a meeting with the relevant QI
stakeholders to discuss the results and their implications. It is important to encourage
an open discussion on what worked, what didn’t, and what could be improved. This
meeting may also be a conducive platform to make decisions as to whether the
intervention should be adopted on the basis that it led to significant improvements with minimal unintended effects across different settings, or be adapted if it had mixed results. If the intervention did not lead to improvement or created unintended negative results across settings, it may be necessary to abandon it and explore alternative solutions.
• If the QI team decides to adopt the intervention to improve data quality for long-term
implementation, the next step is to standardize the process to ensure that the new
practice becomes the norm and is consistently followed across all concerned health
facilities. This may include detailed documentation of protocols, workflows, roles and
responsibilities to serve as a reference for all staff to ensure consistent implementation
across sites. In some instances, it may only be necessary to revise existing guidelines,
protocols, forms or tools to reflect the standardized change.
• As the goal is to ensure that the interventions implemented can be maintained over time and continue to produce positive results without requiring ongoing external support or intervention, it is important to ensure that the standardized changes are embedded into job descriptions, performance evaluations and local policies.
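
Where the evaluation compares an endline result against a baseline and a relative-reduction objective, it helps to make the arithmetic explicit: a drop from 20% to 10% is a 10 percentage point absolute reduction but a 50% relative reduction. A minimal check using the illustrative figures from the example above:

    # Illustrative figures from the example above: 20% baseline error rate,
    # 10% endline error rate, objective of at least a 50% relative reduction.
    baseline_rate, endline_rate = 0.20, 0.10
    target_relative_reduction = 0.50

    relative_reduction = (baseline_rate - endline_rate) / baseline_rate
    met = relative_reduction >= target_relative_reduction
    print(f"Relative reduction: {relative_reduction:.0%} "
          f"({'objective met' if met else 'objective not met'})")
    # -> Relative reduction: 50% (objective met)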

Table 12. Summary of key data actions for national, subnational and facility levels

National:
• Ensure there are enough resources at subnational and facility levels to support DQI activities
• Provide ongoing capacity building and training around DQI
• Create learning platforms/systems for peer learning among health information officers/data clerks on DQI efforts

Subnational:
• Include a standing agenda item for QI teams at subnational QI meetings to discuss data quality issues and their implications for the QI initiative(s)
• Incorporate data quality into supervision mechanisms (e.g. supportive supervision)
• Ensure that all health facilities have a list of operational definitions of what is being measured as part of the QI programme, and how to collect and report the data
• Support health facilities to ensure that the responsibility for data collection and reporting at each service point is clearly assigned to the relevant staff (i.e. should be in their job description)
• Ensure that all health facilities use standardized or compatible data collection forms (e.g. medical records, registers) that are adapted to the QI initiative
• Ensure that data received from lower levels (district/health facility) are systematically verified for data quality
• Regularly implement procedures to reconcile discrepancies in the reported data

Facility:
• Consider appointing a data quality focal person on the facility QI team
• Establish strong DQA, DQC and DQI mechanisms in QI monitoring efforts
• Ensure that all data source documents (e.g. medical records, registers) are available for auditing purposes
• Incorporate health care improvement actions by QI teams and DQI efforts into management meeting agendas

6.4 Country example


This case study describes how a DQI programme was built into Project Fives Alive! (PFA!), which was implemented in Ghana by the National Catholic Health Service, the Ghana Health Service and the Institute for Healthcare Improvement between 2008 and 2015.

Project Fives Alive! in Ghana


The mission of Project Fives Alive! (PFA!) was to reduce preventable deaths among
children under five by embedding QI approaches into health care across the nation.
With over 80% of the public sector hospitals involved and a target of covering 30%
of the districts, the project set out to create lasting change in health care delivery for
some of the most vulnerable populations.

The path to achieving this monumental goal was not easy. For PFA! to succeed,
reliable and high-quality data was essential. Data-informed decisions helped track
progress, highlighted gaps in health care delivery, and guided new actions to improve
service delivery. But like many developing health care systems, the country struggled
with data quality – completeness, accuracy, timeliness, and reliability of the data
reported in the district health management information system (DHIMS).

Realizing this, PFA! launched a bold initiative to improve data quality – one that was
built on thorough planning and a deep understanding of the local context. This DQI
programme aimed to strengthen the data collection process, root out inaccuracies, and ensure that every bit of information that made it into the system could be trusted.
At the heart of this effort were the district health information officers and hospital
data officers. The PFA! team began by empowering them, providing intensive training
on quality planning tools to help diagnose the root causes of poor data. For many,
this was their first opportunity to develop skills in understanding and resolving data
quality issues systematically.

One of the major challenges lay in discrepancies between the raw data collected in
health facilities and the final figures entered into DHIMS. This mismatch meant that
critical information could be lost or distorted, affecting everything from patient care
decisions to the allocation of resources. In response, PFA! implemented a robust
auditing process where the data officers meticulously compared source data with
what was ultimately reported, identifying and addressing discrepancies to restore
trust in the system.

As data began to improve, PFA! introduced a system to track the progress of key data
metrics: completeness, accuracy and timeliness. Through annotated run charts, they
could visually represent the progress made, showing health care workers at all levels
how their efforts were closing the gaps and improving the quality of both the data
and the care.

PFA!’s work didn’t happen in isolation. In some geographic areas, it overlapped with
other important health care initiatives. USAID funded a project focused on improving
malaria care, while PATH was working on improving maternal and newborn outcomes.
Rather than operate in silos, the Policy, Planning, Monitoring and Evaluation Division
of the Ghana Health Service saw an opportunity to bring these projects together,
harmonizing their efforts to improve data quality at the national level. The result
was a unified approach to data QI that became the gold standard across multiple
health care domains.

The joint approach to DQI led to the development of a comprehensive national protocol. Fourteen priority indicators in MNCH were selected, and a Data Learning
Network was created to foster collaboration among district health information
officers. Through in-person meetings, coaching and mentoring, officers across the
country exchanged ideas and developed action plans to tackle data challenges in
their own facilities. National and regional data validation teams visited districts to
guide and validate efforts, ensuring that data was complete, accurate and timely.

At the district level, officers took proactive measures to improve data quality. They
sent letters to health facilities clarifying data flow processes, highlighted key data
issues, and reinforced reporting timelines. Facility managers were engaged in action
plans, and monthly performance updates were shared with the District Director of
Health Services, creating a culture of accountability and continuous improvement.
Ranking facilities according to their data quality performance added a competitive
edge, motivating teams to strive for better outcomes.

Data audit teams were formed, and regular monthly meetings became the norm. Validation of summary reports became a monthly exercise, ensuring that data at every level of the system was scrutinized and improved upon.

But perhaps the most innovative change PFA! introduced was the concept of multiple
feedback loops. In the old system, data flowed only in one direction – upwards from
the health facilities to district, regional and national levels – with little to no feedback
provided to the lower levels of the health system. This unidirectional flow created a
disconnect, where health care workers reported data without knowing how it was
being used or whether it was even reviewed. PFA! changed that dynamic, ensuring
that data quality feedback was provided at every level, and that lower levels had
visibility into how their data was being utilized to inform decisions.

Management teams got involved, reviewing dashboards that showcased data quality
performance. A space was created in management meetings for data officers to
present their findings, giving them a voice and recognizing their role in driving QI
initiatives.

Learning sessions became a platform for sharing best practices across facilities.
Teams gathered for periodic review meetings, where they presented their successes
and challenges in improving data quality. In a marketplace format, facilities taught
one another, cross-pollinating ideas that spread innovation across the country.

Coaching and mentoring were key to sustaining these efforts. Improvement coaches,
trained in QI and data quality, visited districts to assess the impact of changes and
to help plan future improvement cycles. Their hands-on guidance helped facilities
stay on track and continually push for better results.

By the end of the project, PFA! had fundamentally influenced, and in some respects
helped to change, how data was collected, reported and used across the country’s
health care system. Data quality improved dramatically, which in turn strengthened
the health care system’s ability to track progress, allocate resources effectively, and
ultimately, reduce preventable deaths among children under five years of age. The
ripple effect of the DQI programme reached far beyond PFA!’s immediate goals and
geographic areas, setting a standard for data-driven health care improvement across
the nation.

Chapter 7

7. Strengthening quality improvement measurement capability of key actors

7.1 Key messages


■ Many managers and health workers have limited skills and
experience with selecting QoC indicators and measuring and
interpreting patterns in QoC indicators over time.
■ Capability-strengthening approaches should be practical and
competency-based with regular opportunities to reinforce skills.
■ Opportunities for strengthening QI measurement capability
include pre-service education, in-service training, mentoring,
supervision, and professional development platforms.
■ Different actors need distinct QI measurement knowledge and
skills based on their roles supporting QI measurement in a health
system or organization.
■ Strengthening QI competencies (including measurement
competencies) at scale requires enabling policy, leadership,
technical resources, committed funding, skilled human resources
and coordination between health and education ministries.

7.2 Chapter overview


This chapter outlines the QI measurement knowledge and skills needed by different actors
and reviews approaches for building these skills. The chapter also reviews the system
inputs that are required to build QI measurement competencies including enabling policy,
leadership, financing and skilled human resources.

In line with WHO implementation guidance for improving quality of facility-based MNCH
services (2) that emphasizes the subnational health system (e.g. region, district) as a
primary unit for implementing and monitoring large-scale QI initiatives, this chapter
focuses in particular on the skills needed by subnational managers and front-line QI
teams. Many of the principles, however, are applicable to building the QI skills of managers
and health workers in any large health care organization (e.g. large public or private sector
hospital).

7.3 Key terms and concepts

7.3.1 Importance of building QI measurement capabilities and selected challenges

The successful design and implementation of large-scale QI initiatives to improve care and health outcomes for women, newborns and children requires, among other things, that managers and health care workers possess the knowledge, skills, experience and tools necessary to improve and measure the quality of the care they provide and manage. Multiple actors in a health system or organization (public or private) need QI measurement knowledge and skills to support the design, management and improvement of MNCAH services. These actors include members of QI teams, facility managers, subnational MNCAH and QoC programme managers and health information officers, as well as national-level policy-makers, programme managers and health information officers. Individual actors need different skills based on their specific role supporting the planning, implementation and monitoring of quality improvement initiatives at various system levels. Since not everyone in a health system or organization needs the same depth of knowledge and skills, capability-strengthening strategies should be tailored to the needs of individual actors.

Many frontline health workers may have little to no familiarity with QoC indicators
and even less familiarity with how to measure QoC indicators over time to help guide
improvement. There may be limited opportunities for managers and health workers to
develop QI skills to implement key steps in a QI initiative, including selecting improvement
aims and QoC indicators, analysing root causes for poor QoC, iteratively testing changes
to close gaps, and regularly monitoring patterns in QoC indicators over time to track
progress and guide improvement (see Fig. 1). In many settings, health worker pre-service
education and in-service training do not include QI skills and health workers have limited
opportunities to learn skills on the job. Often, the leadership, financing, skilled expert
trainers and skills-building materials necessary for training and mentoring are lacking.

a. Different actors need different QI measurement knowledge and skills


Table 12 outlines the common QI measurement knowledge and skills needed by health
workers and managers at service delivery, subnational and national level. The table is
not prescriptive, and some elements may not be applicable in all settings. Users of this
guide are encouraged to adapt the table to define the QI measurement competencies
needed by specific actors and health worker cadres in their setting, recognizing that all
actors do not need the same depth of QoC measurement knowledge and skills to support
improvement.

Capability-strengthening strategies should target the specific QI measurement skills and knowledge needed by distinct actors. For example, members of multidisciplinary QI teams
need to be able to extract data and calculate agreed QoC indicators, plot indicator results
in a run chart, and apply simple rules to analyse patterns in results over time as they make
iterative changes to improve care (see chapter 5). QoC programme managers, on the
other hand, need the skills of QI team members as well as a deeper set of QI management
skills related to the planning and management of facility, district and region-wide QI
initiatives including the selection of improvement aims and quality of care indicators
and the supervision of QI teams. To fulfil these functions, managers in an organization

or health system need to ensure the necessary combination of measurement, subject matter and QI expertise to: 1) define improvement aims and guide development of QoC indicators and measurement methods in a QI initiative; 2) regularly support and build capability of QI teams to plot and analyse their results; and 3) monitor site-specific and aggregated results across sites to guide oversight and management of activities in a multi-site QI initiative (e.g. spreading learning from high-performing teams; providing extra support to lower-performing QI teams).

Table 12. QI measurement knowledge and skills needed by different health worker cadres and actors at distinct system levels

Facility managers and QI team members:
• Understand data sources in facility HMIS
• Understand QoC indicator definitions and data collection methods
• Collect data and calculate selected QoC indicators
• Regularly plot QoC indicators in a run chart and annotate with key activities and changes being tested (see 5.4.1)
• Analyse run chart results applying probability rules to interpret the effectiveness of changes being introduced by QI teams (see 5.4.2)
• With support from an expert (within or outside the health care facility), select QoC indicators based on selected improvement aims (see section 3.4)

Subnational (e.g. MNCAH, QoC, health information managers): all skills above for QI teams and facility managers, plus:
• Select a small set of MNCAH QoC indicators to monitor periodically for QI and QC purposes
• Regularly monitor, visualize, interpret and communicate results of subnational QoC indicators (e.g. using a dashboard)
• Analyse site-specific and multi-site aggregated QoC indicator results and use results to manage multi-site QI initiatives (analyse small multiple data visualizations) (see sections 5.4.3 and 5.5)
• Coach/supervise QI teams to retain and deepen QI measurement skills
• Understand importance of and possess skills to disaggregate QoC results by equity stratifiers (see section 5.4.3)

National (e.g. policy-makers, programme managers):
• Define QI measurement knowledge and skills needed by health worker cadres and managers across system levels
• Support development of national strategies to strengthen QI skills as part of pre-service education, in-service training, supervision and professional development platforms
• Lead development of pre-service education and training resources
• Interpret annotated run charts
• Understand importance of and possess skills to disaggregate QoC indicator results by equity stratifiers.

7.4 Practical guidance

7.4.1 Develop a capability-strengthening strategy

Illustrative actors: national and subnational managers.

National and subnational managers can adapt Table 12 to define the core QI measurement knowledge and skills needed by specific health worker cadres and managers in their context. Because measurement is an integral component of a broader process of improving care, measurement-specific skills can and should be incorporated into QI capability-strengthening strategies and curricula. Once required QI skills (inclusive of measurement) have been defined for specific actors and health worker cadres, national and subnational managers should develop a strategy and costed operational plan at appropriate system levels (e.g. district) to build these skills applying adult learning best practices. Important platforms for building core QI skills include pre-service education, in-service training, mentoring, supportive supervision and professional development platforms, among others.

a. Strengthen pre-service education


Almost all cadres of health workers will benefit from learning basic QI measurement skills
during pre-service education, including calculation, plotting and interpretation of QoC
indicators over time. However, in practice, pre-service education for health workers often
does not include QI skills and students have limited opportunities to learn QI skills during
practicums. Because pre-service education programmes for health workers are led or
regulated by the ministry of education in many countries, it is important that ministries of
education and health (including MNCAH and QoC departments) collaborate to ensure that
pre-service education programmes target QI skills based on future competency needs. It
is also important for national policy-makers and programme managers to invest in the
development of competency-based pre-service QI curricula, drawing on existing global
and local resources and applicable resources in other countries (44,61–66).

b. Bolster in-service training


In the absence of strong pre-service education, in-service training is an important
platform for building QI measurement skills of health workers. Training approaches
should be based on adult learning best practices emphasizing on-site skills-based training
methods reinforced via follow-up coaching or mentoring. Training all members of a QI
team together, preferably in their place of work, strengthens the cohesiveness of a QI
team as well as strengthening the skills and confidence of individual team members.
Similarly, training members of a regional or district health management team who
support regional- or district-wide QI initiatives (e.g. health information officer, MNCH
programme manager, QoC officer) helps build and reinforce skills of individual members
as well as the cohesiveness and effectiveness of the subnational management team.
Team-based on-site training is generally more effective than off-site didactic training in
a classroom.

In some cases, a QI team may designate a data officer team member to lead data
collection, calculation and plotting of QoC indicators in a run chart. In other cases, QI
teams may work together to support these steps or may rotate responsibility for this

function. Regardless of who leads specific steps, all managers and QI team members
should have a basic understanding of data sources, calculation of QoC indicators, and
plotting and interpretation of indicators in a run chart. This understanding is fundamental
Strengthening for assessing the effect of the changes the QI team will be testing and monitoring as a
quality
improvement team. On-site training of QI teams can incorporate simple team-based exercises such as
measurement plotting and interpreting QoC indicator results on a paper run chart (e.g. flip chart), using
capability their own HMIS data or simulated data (e.g. using data from a facility’s antenatal care
register to calculate and plot selected QoC indicators in a paper run chart). Using paper
run charts for initial training can help demystify the process of plotting and analysing QoC
indicator results in a run chart, especially for team-members who may not have access
to a computer or may not have strong computer skills.

When appropriate, QI training (inclusive of measurement) can be incorporated into existing technical training (e.g. clinical training), with practical exercises
relevant to the technical area. This helps trainees understand the value of QI and QoC
measurement for improving quality of care in priority technical areas. Several global
MNCH clinical training resources incorporate QI exercises (e.g. identifying and defining
a plan to address common causes for poor quality of MNCAH care in a specific clinical
area as part of an MNCAH clinical training curriculum).

It is important that training and mentoring for managers and supervisors (including
supervisors of supervisors such as regional ministry of health managers) build the skills
needed by managers to provide effective leadership of the planning, financing and
oversight of QI initiatives including QI capability-strengthening activities. In addition to
the skills needed by QI teams, managers and leaders need additional skills for supporting
the design and implementation of subnational QI initiatives, including selection of
improvement aims and meaningful QoC indicators, supporting QI teams and monitoring
individual site and multi-site aggregated QoC indicators to guide effective management
of subnational training and QI initiatives (e.g. using small multiple run charts – see
Chapter 5).

It is important that national and subnational managers invest in the development or adaptation of robust QI training materials and a pool of expert QI trainers. Many global
MNCAH QI training resources focus primarily on QI skills needed by front-line QI team
members (41,42,61–66). Unfortunately, there are fewer training materials that address
the broader set of QI skills needed by managers supporting a subnational (e.g. district-
wide) or organization-wide QI initiative (e.g. large hospital).

7.4.2 Incorporate QI measurement into mentoring and supportive supervision
Mentoring after training to reinforce and deepen QI skills is an important best practice
since QI skills, like clinical skills, often decay after initial training if not reinforced and
used on a regular basis. In many settings, supervision is infrequent and highly structured,
and it may be difficult to incorporate mentoring to reinforce skills into supervision visits.
However, when supervisors possess the necessary skills and have the necessary flexibility,
supervision visits can be an important platform to reinforce health worker QI skills. Ideally,
supervision visits should include a review of local QoC indicator results and support
to QI teams. Other options for reinforcing QI skills include periodic mentoring by local
experts (ideally, experts who led the initial training and have strong mastery of skills).

For example, experts from local institutions, organizations, professional associations and other entities can support training and follow-up mentoring in many settings. Follow-up mentoring can be provided via blended virtual and in-person approaches. In some settings, subnational QI initiatives organize periodic learning meetings across sites to share and spread learning; such learning meetings can be an efficient mechanism to reinforce health worker QI skills.
7.4.3 Strengthen professional development platforms
In many countries, health workers must participate in continuing professional
development (CPD) activities to maintain their professional licensure. Requiring a
certain proportion of CPD activities or points to be focused on QI skills is one approach
to strengthen and reinforce skills, provided that CPD activities are of high-quality. CPD
activities may encompass a range of modalities including virtual self-directed learning
materials, training workshops in professional conferences, virtual synchronous and
asynchronous training courses leading to certification, mentored practicums, and others.
It is important that CPD activities and materials be well vetted by experts to ensure that materials are of high quality, apply adult learning best practices and focus on important QI
competencies. In some settings, CPD activities and materials are regulated by professional
associations.

Table 13. Summary of key actions for national, subnational and facility levels

National:
• Define QI measurement skills needed by specific health worker cadres
• Develop or adapt practical competency-based QI curriculum and tools
• Define strategies to build health worker QI measurement skills at scale (e.g. via pre-service education, in-service training and supervision platforms)
• Promote a non-punitive culture that encourages open sharing of local QoC results by managers
• Develop and certify continuing professional development (CPD) QI skills-building materials
• Consider linking CPD activities to maintenance of licensure
• Invest in the development and maintenance of a pool of expert QI trainers

Subnational:
• Assess QI measurement knowledge and skills of health workers based on defined needed QI competencies
• Develop, finance and oversee costed subnational plans to build and reinforce essential QI skills (e.g. via training, supervision and other activities integrated into annual operational plans)
• Coach/supervise QI teams to retain and deepen QI measurement skills
• Identify and engage experts to support training and mentoring
• Recognize high-performing QI teams that demonstrate strong measurement skills and solicit their support as mentors to other QI teams
• Incorporate skills-building into established standing meetings

Facility:
• Incorporate regular report-outs of QoC results by QI teams as part of facility management meetings
• Support and finance health workers’ participation in training and CPD opportunities focused on QI skills, including measurement (virtual and in-person)
• Promote access to experts to support QI team and individual health worker learning as needed
• Recognize QI teams and individual health workers who demonstrate QI measurement leadership and skills



7.5 Country example

This case study describes the approaches used to strengthen QI capabilities of key actors in a large-scale child health care improvement initiative in Ghana (this initiative is also the focus of the case study in chapter 6).

Project Fives Alive! in Ghana


Project Fives Alive! (PFA!) was implemented in Ghana as a partnership between the
National Catholic Health Service, the Ghana Health Service, and the Institute for
Healthcare Improvement from 2008 to 2015. PFA! strengthened the use of quality
improvement methods and approaches to accelerate reduction of preventable
deaths among children less than five years of age, progressively covering about 30%
of districts and 80% of public sector hospitals in Ghana. Between 2016 and 2021,
through additional support from the Bill & Melinda Gates Foundation, the Institute
for Healthcare Improvement and Ubora Quality Institute partnered with the Ministry
of Health to develop and implement a Ghana Healthcare Quality Strategy.

Given the critical role of reliable data and measurement in quality improvement
efforts, building improvement and measurement knowledge and skills of actors
at multiple system levels was integral to PFA!’s approaches. Training approaches
employed a variety of strategies to build skills for two primary roles: improvement advisors and improvement coaches. Ten to fifteen improvement advisors responsible for technical oversight of all PFA! activities received intensive training in the science of
improvement, including systems thinking, data and variation, theory of knowledge
and the psychology of human behaviour. This sequential training was provided
over a nine-month period and included three weeks of classroom-based learning
interspersed with practical on-the-job training in the field, periodic in-person and
virtual sessions to apply key concepts and receive peer feedback and one-on-one
coaching from experts. These highly trained improvement advisors were leveraged
by the project team to drive large system change including visioning, design and
adaptation, and teaching and training.

As PFA! went to scale, 400 regional, district and facility managers and QI team managers
employed in the Ghana Health Service were trained as improvement coaches
to provide more intensive support to local activities. PFA! designed a 6–8 week
progressive training course to train these staff, including three days of in-person
instructional time followed by regular facility-based mentoring and learning. With
minimal theoretical concepts, the training focused on practical skills, removing the
requirement for laptops, and emphasizing the plotting and analysis of QoC indicator
results using paper-based run charts. The training focused on the practical application
of five basic QI tools including fishbone analysis, process maps, 5 Whys, Plan–Do–
Study–Act cycles, and Pareto charts. Improvement coaches were mentored by experts
and learned on the job, visiting facility QI teams on a monthly basis to support their
use of local data to assess the effect of changes they were making based on QoC
indicator results.

Brought into the centre of QI teams, information officers became critical decision
support analysts to other QI team members as they often led the process of plotting
and interpreting annotated run charts to identify successful, feasible and scalable changes based on testing. Additionally, PFA! created a learning network of health information officers that helped build the skills, confidence, and status of health information officers at facility and district levels. Learning from PFA! highlights the
importance of targeting different knowledge and skills tailored to the roles of specific
actors and the importance of approaches that strengthen and regularly reinforce
skills over time, emphasizing on-the-job skills-building and regular peer-to-peer learning. Support of senior managers was critical to spread and sustainability. At the
end of the project, an independent evaluation concluded that “the initial investment
in capacity building had proved to be cost effective”.



References

1. Improving the quality of care for maternal, newborn and child health: implementation
guide for national, district and facility levels. Geneva: World Health Organization; 2022
(https://iris.who.int/handle/10665/353738).
2. Global meeting of the Network for Improving Quality of Care for Maternal, Newborn
and Child Health. 14–16 March 2023, Accra, Ghana. Meeting report. Geneva: Network
for Improving Quality of Care for Maternal, Newborn and Child Health; 2023 (https://
qualityofcarenetwork.org/knowledge-library/report-third-global-meeting-network-
improving-quality-care-maternal-newborn-and, accessed 27 November 2024).
3. Mother and Newborn Information for Tracking Outcomes and Results (MoNITOR)
[website]. Geneva: World Health Organization; 2024 (https://www.who.int/groups/mother-
and-newborn-information-for-tracking-outcomes-and-results-(monitor), accessed 27
November 2024).
4. Child Health Accountability Tracking (CHAT) [website]. Geneva: World Health Organization;
2024. (https://www.who.int/groups/child-health-accountability-tracking-technical-
advisory-group, accessed 27 November 2024).
5. Health information system landscape assessment (HISLA): a tool for assessing the
feasibility of collecting, reporting, and using quality of care indicators. Geneva: World
Health Organization; 2025 (https://cdn-auth-cms.who.int/media-aut/docs/default-source/
mca-documents/qoc/hq-2024-01088--web-annex-a.xlsx?sfvrsn=24cf3efd_7).
6. Kruk ME, Gage AD, Arsenault A, Jordan K, Leslie HH, Roder-DeWan S et al. High-quality
health systems in the Sustainable Development Goals era: time for a revolution. The Lancet
Global Health Commission. Lancet Glob Health. 2018;6(11):e1196–e1252.
7. Quality of Care [website]. Geneva: World Health Organization; 2024 (https://www.who.int/
health-topics/quality-of-care, accessed 27 November 2024).
8. Delivering quality health services: A global imperative for universal health coverage.
Geneva: World Health Organization; 2018 (https://iris.who.int/handle/10665/272465).
9. Kudzma EC. Florence Nightingale and healthcare reform. Nurs Sci Q. 2006;19(1):61–64.
doi:10.1177/0894318405283556.
10. Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q. 2005;83(4):691–729.
11. Donabedian A. The quality of care. How can it be assessed? Arch Pathol Lab Med.
1997;121(11):1145–50.
12. Juran JM. The quality trilogy: a universal approach to managing for quality. Wilton: Juran
Institute; 1989.
13. Quality of care for maternal and newborn health: a monitoring framework for network
countries. Geneva: Network for Improving Quality of Care for Maternal, Newborn and
Child Health; 2019 (https://cdn.who.int/media/docs/default-source/mca-documents/
qoc/qed-quality-of-care-for-maternal-and-newborn-health-a-monitoring-framework-for-
network-countries.pdf, accessed 27 November 2024).
14. The Network for Improving Quality of Care for Maternal, Newborn and Child Health (Quality
of Care Network) Geneva: World Health Organization; 2024 (https://www.who.int/groups/
Quality-of-care-network, accessed 27 November 2024).
15. Standards for improving quality of maternal and newborn care in health facilities. Geneva:
World Health Organization; 2016 (https://iris.who.int/handle/10665/249155).

16. Standards for improving the quality of care for children and young adolescents
in health facilities. Geneva: World Health Organization; 2018 (https://iris.who.int/
handle/10665/272346).
17. Standards for improving quality of care for small and sick newborns in health facilities.
Geneva: World Health Organization; 2020 (https://iris.who.int/handle/10665/334126).
18. Langley G, Moen R, Nolan K, Nolan T, Norman C, Provost L. The improvement guide: A
practical approach to enhancing organizational performance. San Francisco, CA: Jossey-
Bas; 2009.
19. Maternal, newborn, child, adolescent health and ageing and quality of care indicator
metadata toolkit [online database]. Geneva: World Health Organization; 2024 (https://
platform.who.int/data/maternal-newborn-child-adolescent-ageing/indicator-toolkit/
adolescent-health-indicators, accessed 19 June 2024).
20. Heywood A, Rohde J. Using information for action. A manual for health workers at facility
level. Pretoria: Equity Project; 2002.
21. Day LT, Ruysen H, Gordeev VS, Gore-Langton GR, Boggs D, Cousens S et al. “Every Newborn–
BIRTH” protocol: observational study validating indicators for coverage and quality of
maternal and newborn health care in Bangladesh, Nepal and Tanzania. J Glob Health.
2019;9(1): 010902. doi:10.7189/jogh.09.010902.
22. Toolkit on monitoring health systems strengthening. Health information systems. World
Health Organization; 2008 (https://www.who.int/publications/m/item/health-information-
systems, accessed 27 November 2024).
23. Terms related to HIS. Health Information Systems Strengthening Resource Center
[website]. Chapel Hill: MEASURE Evaluation; 2023 ( https://www.measureevaluation.org/
his-strengthening-resource-center/his-definitions/terms-related-to-his.html, accessed 27
November 2024).
24. Toolkit for analysis and use of routine health facility data: general principles. Geneva:
World Health Organization; 2021 (https://cdn.who.int/media/docs/default-source/world-
health-data-platform/rhis-modules/general-principles-2021-01-21-final.pdf, accessed 27
November 2024).
25. OpenHIE Health management information system (HMIS) [website]. OpenHIE (https://
guides.ohie.org/arch-spec/openhie-component-specifications-1/openhie-health-
management-information-system-hmis, accessed 27 November 2024).
26. Brook E, World Health Organization. The current and future use of registers in health
information systems. Geneva: World Health Organization; 1974 (https://iris.who.int/
handle/10665/36936).
27. Guideline on registry-based studies. Amsterdam: European Medicines Agency; 2021
(https://www.ema.europa.eu/en/guideline-registry-based-studies-scientific-guideline,
accessed 27 November 2024).
28. Privacy framework. Washington DC: National Institute of Standards and Technology, U.S.
Department of Commerce; 2020 (https://www.nist.gov/privacy-framework, accessed 27
November 2024).
29. MacFeely S. In search of the data revolution: Has the official statistics paradigm shifted?
Stat J IAOS. 2020;36(4):1075–94. doi:10.3233/SJI-200662.
30. SDMX Content-oriented guidelines. Metadata common vocabulary. Statistical Data and
Metadata eXchange; 2006 (https://sdmx.org/wp-content/uploads/Content_04_Draft_
Guidelines_Metadata_Common_Vocabulary-MARCH-2006-1.pdf, accessed 27 November
2024).
31. Elmasri R, Navathe SB. Fundamentals of database systems. 7th ed. Harlow: Pearson; 2016.
32. Rowley J. The wisdom hierarchy: representations of the DIKW hierarchy. J Info. Sci.
2007;33(2):163–180.
33. Framework and standards for country health information systems (2nd ed). Geneva: World
Health Organization; 2008 (https://iris.who.int/handle/10665/43872).

34. Every Newborn – Measurement improvement for newborn and stillbirth indicators
(EN-MINI-PRISM) tools for routine health information systems. Chapel Hill: Data for Impact;
2023 (https://www.data4impactproject.org/wp-content/uploads/2023/04/EN-MINI-PRISM-
Tools-2.0_TL-23-102-D4I_508.pdf, accessed 27 November 2024).
35. Analysis and use of health facility data: guidance for maternal, newborn, child and
adolescent health programme managers. Geneva: World Health Organization; 2023
(https://iris.who.int/handle/10665/373826).
36. Amayo NA. Approaches to and experiences in standardizing health facility data
collection and reporting tools in Kenya. Presented at: World Health Organization global
consultation on the standardization of health facility data capture and reporting forms for
maternal, newborn, and child health, and links to home-based records; 17–19 September
2024; Geneva.
37. How to Guide for quality improvement. Johannesburg: The Aurum Institute; 2019 (https://
online.fliphtml5.com/hgjjt/nchh/, accessed 27 November 2024).
38. Brady PW, Tchou MJ, Ambroggio L, Schondelmeyer AC, Shaughnessy EE. Displaying and
Analyzing Quality Improvement Data. J Pediatric Infect Dis Soc. 2018 May 15;7(2):100–103.
doi:10.1093/jpids/pix077.
39. Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning
from variation in healthcare processes. BMJ Qual Saf. 2011;20:46–51. doi:10.1136/
bmjqs.2009.037895.
40. Provost LP, Murray SK. The health care data guide: learning from data for improvement.
2nd ed. Hoboken: Jossey-Bass; 2022.
41. Tips and tools for learning improvement. Chevy Chase: Applying Science to Strengthen and
Improve Systems (ASSIST) Project/United States Agency for International Development;
2017 (https://www.urc-chs.com/wp-content/uploads/urc-assist-tips-tools-learning-
improvement.pdf, accessed 27 November 2024).
42. Whitley E, Ball J. Statistics review 1: presenting and summarising data. Crit Care. 2002;6(1):
66–71. doi:10.1186/cc1455.
43. Run Chart Part 2: Interpretation of run chart data. St Leonards: Clinical Excellence
Commission; 2021 (https://youtu.be/UvitAnmnx6I?si=e-sQ9abCDzJSW-eX, accessed 27
November 2024).
44. Dehlendorf C, Henderson JT, Vittinghoff E, Steinauer J, Hessler D. Development of
a patient-reported measure of the interpersonal quality of family planning care.
Contraception. 2018;97(1):34–40. doi:10.1016/j.contraception.2017.09.005.
45. Bietsch K, Sonneveldt E. What does the Method Information Index tell us about quality of
service? Glastonbury: Avenir Health; 2020 (https://www.track20.org/download/pdf/MII_
Poster_Portrait_101418.pdf, accessed 27 November 2024).
46. Data quality assurance: module 1: framework and metrics. Geneva: World Health
Organization; 2022 (https://iris.who.int/handle/10665/366086).
47. Data quality assurance: module 2: discrete desk review of data quality. Geneva: World
Health Organization; 2022 (https://iris.who.int/handle/10665/365642).
48. Data quality assurance: module 3: site assessment of data quality: data verification
and system assessment. Geneva: World Health Organization; 2022 (https://iris.who.int/
handle/10665/365643).
49. District data quality assurance: a training package for monthly use of DHIS2 data quality
dashboards at district and health facility levels. Geneva: World Health Organization; 2022
(https://iris.who.int/handle/10665/365745).
50. AbouZahr C, Boerma T, Hogan D. Global estimates of country health indicators: useful,
unnecessary, inevitable? Glob Health Action. 2017;10(sup1):1290370. doi:10.1080/1654971
6.2017.1290370.
51. Stevens GA, Alkema L, Black RE, Boerma JT, Collins GS, Ezzati M, et al. Guidelines for
accurate and transparent health estimates reporting: the GATHER statement. Lancet.
2016;388:e19–e23.

52. Byass P, de Courten M, Graham WJ, Laflamme L, McCaw-Binns A, Sankoh OA, et
al. Reflections on the global burden of disease 2010 estimates. PLoS Med. 2013;
10(7):e1001477. doi:10.1371/journal.pmed.1001477.
53. Alegana VA, Okiro EA, Snow RW. Routine data for malaria morbidity estimation in Africa:
challenges and prospects. BMC Med. 2020;18:121. doi:10.1186/s12916-020-01593-y.
54. Goldhill DR, Sumner A. Data accuracy and outcome prediction. Anaesthesia.
1998;53(10):937–943. doi:10.1046/j.1365-2044.1998.00534.x.
55. Lorenzoni L, Da Cas R, Aparo UL. The quality of abstracting medical information from the
medical record: the impact of training programmes. Int J Qual Health Care. 1999;11(3):209–
213. doi:10.1093/intqhc/11.3.209.
56. Horbar JD, Leahy KA. An assessment of data quality in the Vermont-
Oxford Trials Network database. Control Clin Trials. 1995;16(1):51–61.
doi:10.1016/0197-2456(94)00019-y.
57. Arts DGT, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical
registries: a literature review, case study, and generic framework. J Am Med Inform Assoc.
2002;9:600–611. doi:10.1197/jamia.m1087.
58. Kumah A, Nwogu CN, Issah A-R, Obot E, Kanamitie DT, Sifa JS, et al. Cause-and-effect
(fishbone) diagram: A tool for generating and organizing quality improvement ideas. Glob J
Qual Saf Healthc. 2024;7(2):85–87.
59. Serrat O. The 'Five Whys' technique. In: Knowledge solutions: tools, methods and
approaches to drive organizational performance. Singapore: Springer; 2017.
60. Sammut-Bonnici T. Pareto analysis. In: Wiley encyclopedia of management. Hoboken:
John Wiley & Sons; 2015.
61. Five key lessons on building improvement capability. London: The Health Foundation; 2015
(https://www.health.org.uk/newsletter-feature/five-key-lessons-building-improvement-
capability, accessed 27 November 2024).
62. Improving the quality of care for mothers, newborns and children in health facilities:
learner manual. Version 3. New Delhi: World Health Organization Regional Office for South-
East Asia (https://iris.who.int/handle/10665/331665).
63. Improving care of mothers and babies. A guide for improvement teams. [Asia Version].
Survive & Thrive; 2016 (https://www.healthynewbornnetwork.org/hnn-content/uploads/
Improving-Care-of-Mothers-and-Babies_Asia-Version_Eng.-2016.pdf, accessed 27
November 2024).
64. Improving care of mothers and babies. A guide for improvement teams. [Africa English
version]. Survive & Thrive; 2016 (https://www.urc-chs.com/wp-content/uploads/urc-assist-
improving-care-mothers-babies-en-africa.pdf, accessed 27 November 2024).
65. Améliorer les soins des mères et des nouveau-nés. Un guide pour les équipes
d’amélioration. [French version]. Survive & Thrive; 2016 (https://www.urc-chs.com/
wp-content/uploads/urc-assist-improving-care-mothers-babies-fr.pdf, accessed 27
November 2024).
66. Coaching guide. New Delhi: Point of Care Quality Improvement (https://www.pocqi.org/
wp-content/uploads/2018/07/Coaching-guide.pdf, accessed 27 November 2024).

Annex. Detailed metadata for core MNCH QoC indicators

Core maternal and newborn QoC indicators mapped to the QoC standards
Each indicator entry below lists: indicator definition; proposed classification; numerator; denominator; proposed data disaggregation; data source; and proposed frequency.

1. Institutional maternal mortality ratio
   Definition: Number of maternal deaths prior to discharge per 100 000 births (live and stillbirths).
   Proposed classification: Outcome/Impact
   Numerator: Number of maternal deaths prior to discharge.
   Denominator: Number of births (live and stillbirths) in the health facility during the reporting period.
   Proposed disaggregation: Type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

2. Institutional obstetric case fatality rate
   Definition: Percentage of women who delivered at the facility and experienced obstetric complications (regardless of time of onset) and died from these complications before discharge.
   Proposed classification: Outcome/Impact
   Numerator: Number of women who delivered at the facility and experienced obstetric complications (regardless of time of onset) and died from these complications before discharge.
   Denominator: Number of women who delivered at the facility during the reporting period.
   Proposed disaggregation: Direct and indirect; type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

3. Pre-discharge neonatal mortality rate (disaggregated by cause)
   Definition: Percentage of babies born live in a facility who die prior to discharge.
   Proposed classification: Outcome/Impact
   Numerator: Number of babies born live in a facility who die prior to discharge from the facility (up to 28 completed days of life), per 1000 live births in a given year or period; excludes re-admissions for illness.
   Denominator: Number of babies born live in the facility during the reporting period.
   Proposed disaggregation: Cause; type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

4. Institutional stillbirth rate
   Definition: Percentage of total institutional stillbirths among all institutional deliveries.
   Proposed classification: Outcome/Impact
   Numerator: Number of babies delivered in a facility with no signs of life and born weighing at least 1000 grams or after 28 weeks of gestation, per 1000 births (alive or dead at birth).
   Denominator: Number of babies born in the facility (live and stillbirths) during the reporting period.
   Proposed disaggregation: Antepartum or intrapartum; type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

5. Immediate administration of a uterotonic after birth for postpartum haemorrhage prevention
   Definition: Percentage of women who gave birth in a health facility who received a prophylactic uterotonic immediately after birth (ideally within one minute) for postpartum haemorrhage prevention.
   Proposed classification: Process
   Numerator: Number of women who gave birth in a facility and received a prophylactic uterotonic immediately after birth (ideally within one minute) for prevention of postpartum haemorrhage.
   Denominator: Number of women who gave birth in the facility during the reporting period.
   Proposed disaggregation: Type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

6. Breastfeeding initiation within one hour of birth
   Definition: Percentage of babies born alive in a facility who are breastfed within one hour of birth.
   Proposed classification: Process
   Numerator: Number of babies born alive in a facility who are breastfed within one hour of birth.
   Denominator: Number of babies born alive in the facility during the reporting period.
   Proposed disaggregation: Type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

7. Newborns with birthweight documented
   Definition: Percentage of babies born in a health facility in a given period with documented birthweight before discharge.
   Proposed classification: Process
   Numerator: Number of babies born (livebirths and stillbirths) in a facility in a given period with documented birthweight before discharge.
   Denominator: Total number of babies born in the facility (livebirths and stillbirths) during the reporting period.
   Proposed disaggregation: Type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

8. Kangaroo mother care for newborns weighing 2000 g or less
   Definition: Percentage of newborns weighing ≤ 2000 g who are initiated on kangaroo mother care.
   Proposed classification: Process
   Numerator: Number of newborns weighing ≤ 2000 g who are initiated on kangaroo mother care (or admitted to a kangaroo mother care unit if a separate unit exists).
   Denominator: Total number of newborns weighing ≤ 2000 g during the reporting period.
   Proposed disaggregation: Weight (≤ 1500 g and 1500–<2000 g); immediate/non-immediate; type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

9. Postpartum counselling for mother and baby
   Definition: Percentage of women who received pre-discharge counselling for the mother and the baby in a given period.
   Proposed classification: Process
   Numerator: Number of women who received pre-discharge counselling for the mother and the baby in a given period (covering the minimum elements).
   Denominator: Number of women interviewed who delivered at the facility during the reporting period.
   Proposed disaggregation: Type of health facility.
   Data source: Client exit interviews.
   Proposed frequency: Quarterly.

10. Companion of choice during labour and childbirth
    Definition: Percentage of women who wanted and had a companion of choice supporting them during labour and childbirth in the health facility.
    Proposed classification: Outcome
    Numerator: Number of women who wanted and had a companion supporting them during labour and childbirth in the health facility.
    Denominator: Number of women interviewed who wanted a companion during labour and childbirth and delivered at the facility during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Client exit interviews.
    Proposed frequency: Quarterly.

11. Physical abuse during labour, childbirth or postpartum period
    Definition: Percentage of women who reported being physically abused at any time during labour, childbirth or the postpartum period in the health facility. (Physical abuse: slapped, pinched or punched by a health worker or other facility staff.)
    Proposed classification: Outcome
    Numerator: Number of women who report physical abuse during labour or childbirth.
    Denominator: Number of women interviewed who delivered at the facility during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Client exit interviews.
    Proposed frequency: Quarterly.

12. Verbal abuse during labour, childbirth or postpartum period
    Definition: Percentage of women who reported being verbally abused at any time during labour, childbirth or the postpartum period in the health facility. (Verbal abuse: shouted or screamed at, insulted, scolded or mocked by a health worker or other staff.)
    Proposed classification: Outcome
    Numerator: Number of women who report verbal abuse during labour or childbirth.
    Denominator: Number of women interviewed who delivered at the facility during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Client exit interviews.
    Proposed frequency: Quarterly.

13. Basic hygiene provision
    Definition: Percentage of health facilities in which [all] delivery room(s) have at least one functional handwashing station with water and soap available.
    Proposed classification: Input
    Numerator: Number of facilities in which [all] [at least one] delivery room(s) have at least one functional handwashing station with water and soap available.
    Denominator: Number of facilities assessed during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Health facility survey/assessment.
    Proposed frequency: Quarterly.

14. Basic sanitation for women and their family
    Definition: Percentage of health facilities with basic sanitation available for women and their families during and after labour and childbirth.
    Proposed classification: Input
    Numerator: Number of facilities with basic sanitation available for women during and after labour and childbirth (clean running water, waste disposal facilities, toilets and sanitation material for women).
    Denominator: Number of facilities assessed during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Health facility survey/assessment.
    Proposed frequency: Quarterly.
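As a hedged illustration of how these metadata translate into computed values, the short sketch below applies the numerator and denominator definitions for indicators 1 and 5 above to facility counts that are invented purely for the example.

```python
# Hypothetical worked example: applying two of the indicator definitions
# above to invented facility counts (not data from this guide).

# Indicator 1: institutional maternal mortality ratio, per 100 000 births.
maternal_deaths_before_discharge = 3        # numerator
births_live_and_still = 2400                # denominator
mmr = maternal_deaths_before_discharge / births_live_and_still * 100_000
print(f"Institutional maternal mortality ratio: {mmr:.0f} per 100 000 births")

# Indicator 5: immediate uterotonic administration, as a percentage.
women_given_uterotonic = 2210               # numerator
women_who_gave_birth = 2350                 # denominator
pct = women_given_uterotonic / women_who_gave_birth * 100
print(f"Immediate uterotonic administration: {pct:.1f}%")
```
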
Core paediatric and young adolescent QoC indicators mapped to the QoC standards

Each indicator entry below lists: indicator definition; proposed classification; numerator; denominator; proposed data disaggregation; data source; and proposed frequency.

1. Institutional child mortality rate
   Definition: Number of pre-discharge child deaths per 1000 children who visited the health facility.
   Proposed classification: Impact
   Numerator: Number of children who died in the health facility before discharge.
   Denominator: Number of children who visited the health facility for medical care during the reporting period.
   Proposed disaggregation: Sex; type of health facility; age groups (0–7 days, 8–27 days, 28–59 days, 60 days–<1 year, 1–<5 years, 5–<10 years, 10–14 years); death within and after 24 hours of admission.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

2. In-hospital paediatric case fatality rate by common paediatric conditions
   Definition: Percentage of children who were diagnosed with sepsis, pneumonia, malaria, meningitis or severe acute malnutrition (SAM) and died in the health facility.
   Proposed classification: Outcome/Impact
   Numerator: Number of children who were diagnosed with sepsis, pneumonia, malaria, meningitis or SAM and died in the health facility.
   Denominator: Number of children who visited the health facility and were diagnosed with sepsis, pneumonia, malaria, meningitis or SAM during the reporting period.
   Proposed disaggregation: Condition; level of health facility (e.g. secondary level); age groups (0–7 days, 8–27 days, 28–59 days, 60 days–<1 year, 1–<5 years, 5–<10 years, 10–14 years); death within and after 24 hours of admission to the facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

3. Assessment of sick children < 5 years old based on the integrated management of newborn and childhood illness criteria
   Definition: Percentage of sick children < 5 years old who were assessed in the health facility based on key integrated management of newborn and childhood illness (IMNCI) criteria (presence or absence of danger signs: ability to drink or breastfeed; vomits everything; convulsions; lethargy) and who received rapid physical and clinical assessment including weight, Z-score or MUAC, respiratory rate, temperature, cough, difficult breathing/chest indrawing, diarrhoea/dehydration status, vaccination status and palmar pallor.
   Proposed classification: Process/Output
   Numerator: Number of sick children < 5 years old who were assessed based on key IMNCI assessment criteria.
   Denominator: Number of sick children < 5 years old who visited the health facility during the reporting period.
   Proposed disaggregation: Type of health facility; age (0–<2 months, 2 months–<5 years); sex.
   Data source: Patient medical records.
   Proposed frequency: Monthly.

4. Treatment of possible severe bacterial infection at outpatient level
   Definition: Percentage of young infants (< 2 months old) classified as having possible severe bacterial infection (PSBI) or signs of PSBI, or very severe disease or sepsis, who were prescribed appropriate antibiotics according to WHO guidelines. (Signs of PSBI: movement only when stimulated or no movement at all; not feeding well on observation; temperature greater than or equal to 38 °C or less than 35.5 °C; severe chest indrawing; convulsions; fast breathing (60 breaths per minute or more) in infants less than 7 days old.)
   Proposed classification: Process/Output
   Numerator: Number of young infants (< 2 months old) classified as having PSBI or any child with related signs, or very severe disease or sepsis, who were prescribed appropriate antibiotics according to WHO guidelines.
   Denominator: Number of sick young infants (< 2 months old) classified as having PSBI or any child with related signs, or very severe disease or sepsis, who visited the health facility during the reporting period.
   Proposed disaggregation: Sex; weight cut-offs (< 2000 g, ≥ 2000 g).
   Data source: Patient medical records.
   Proposed frequency: Monthly.

5. Kangaroo mother care for newborns weighing 2000 g or less
   Definition: Percentage of newborns weighing ≤ 2000 g who are initiated on kangaroo mother care (KMC) according to WHO guidelines.
   Proposed classification: Process/Output
   Numerator: Number of newborns weighing ≤ 2000 g who are initiated on KMC according to WHO guidelines.
   Denominator: Number of newborns weighing ≤ 2000 g who were born in or presented to the health facility during the reporting period.
   Proposed disaggregation: Type of health facility; sex; weight (≤ 1500 g and 1500–<2000 g).
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

6. Pneumonia treatment with first-choice antibiotic for children aged between 7 days and 5 years
   Definition: Percentage of children aged between 7 days and 5 years who were prescribed amoxicillin for treatment of pneumonia.
   Proposed classification: Process/Output
   Numerator: Number of children aged between 7 days and 5 years who were diagnosed with pneumonia or showed signs of fast breathing and/or chest indrawing and were prescribed oral amoxicillin.
   Denominator: Number of children aged between 7 days and 5 years seen in the health facility with pneumonia or fast breathing and/or chest indrawing during the reporting period.
   Proposed disaggregation: Sex; type of health facility; age (7–59 days, 2 months–<5 years).
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

7. Management of acute watery diarrhoea among children < 5 years old
   Definition: Percentage of children < 5 years old diagnosed with acute watery diarrhoea in a health facility who received appropriate treatment for diarrhoea [oral rehydration solution (ORS), plus zinc supplementation if aged 2 months–<5 years].
   Proposed classification: Process/Output
   Numerator: Number of children < 5 years old who were diagnosed with acute watery diarrhoea and received appropriate treatment for diarrhoea.
   Denominator: Number of children < 5 years old with a diagnosis of acute watery diarrhoea who visited the health facility.
   Proposed disaggregation: Age (0–59 days, 2 months–<5 years); sex; type of health facility.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

8. Children and young adolescents with documented malaria test results
   Definition: Percentage of children and young adolescents (< 15 years old) in malaria endemic areas who presented to the health facility with fever and whose malaria test results are available (results from microscopy or malaria rapid diagnostic test). Note: according to WHO, malaria endemic areas are areas in which there is an ongoing, measurable incidence of malaria infection and mosquito-borne transmission over a succession of years.
   Proposed classification: Process/Output
   Numerator: Number of children and young adolescents (< 15 years old) in malaria endemic areas who presented to the health facility with fever and whose malaria test results are available.
   Denominator: Number of children and young adolescents (< 15 years old) in malaria endemic areas who visited the health facility with fever during the reporting period.
   Proposed disaggregation: Type of health facility; sex; diagnosis; age (< 1 year, 1–<5 years, 5–9 years, 10–14 years).
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

9. Treatment of uncomplicated severe acute malnutrition
   Definition: Percentage of children aged between 6 months and 5 years with uncomplicated severe acute malnutrition (SAM) who were treated according to WHO guidelines.
   Proposed classification: Process/Output
   Numerator: Number of children aged between 6 months and 5 years with uncomplicated SAM who were treated according to WHO guidelines.
   Denominator: Number of children aged between 6 months and 5 years diagnosed with uncomplicated SAM who visited the health facility during the reporting period.
   Proposed disaggregation: Sex.
   Data source: Routine health management information system.
   Proposed frequency: Monthly.

10. Management of anaemia in children and young adolescents
    Definition: Percentage of children and young adolescents (2 months to ≤ 15 years old) with anaemia who were treated according to WHO guidelines (iron, plus mebendazole if aged 1 year or older and not given mebendazole in the last 6 months).
    Proposed classification: Process/Output
    Numerator: Number of children and young adolescents (2 months to ≤ 15 years old) diagnosed with anaemia who were treated according to WHO guidelines.
    Denominator: Number of children and young adolescents (2 months to ≤ 15 years old) diagnosed with anaemia who visited the health facility.
    Proposed disaggregation: Sex; age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years).
    Data source: Routine health management information system.
    Proposed frequency: Monthly.

11. Children < 2 years of age with known HIV status for either the mother and/or the child
    Definition: Percentage of children < 2 years of age for whom the HIV status of the mother and/or the child is known (positive or negative).
    Proposed classification: Process/Output
    Numerator: Number of children < 2 years of age for whom the HIV status of the mother and/or the child is known.
    Denominator: Number of children < 2 years old who visited the health facility during the reporting period.
    Proposed disaggregation: Sex of the child; mother or child status known.
    Data source: Routine health management information system.
    Proposed frequency: Monthly.

12. TB evaluation for children and young adolescents with presumptive TB
    Definition: Percentage of children and young adolescents (< 15 years old) eligible for TB screening who were referred or further assessed for TB. (TB screening eligibility: reported a cough duration > 14 days, or were diagnosed with SAM, or had confirmed HIV infection.)
    Proposed classification: Process/Output
    Numerator: Number of children and young adolescents (< 15 years old) eligible for TB screening who were referred or further assessed for TB.
    Denominator: Number of children and young adolescents (< 15 years old) eligible for TB screening who visited the health facility during the reporting period.
    Proposed disaggregation: Sex; age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years).
    Data source: Routine health management information system.
    Proposed frequency: Monthly.

13. Catch-up immunization for children < 2 years old
    Definition: Percentage of children < 2 years of age eligible for DTP-Hep-B-HIB, IPV, RTV, PCV or measles-containing vaccine who received all catch-up immunization during medical visits. (Eligibility: unvaccinated or partially vaccinated with these vaccines according to their age and the national immunization schedule.)
    Proposed classification: Process/Output
    Numerator: Number of children < 2 years of age eligible for DTP-Hep-B-HIB, IPV, RTV, PCV and measles-containing vaccine who were administered all catch-up immunization.
    Denominator: Number of children < 2 years of age eligible for DTP-Hep-B-HIB, IPV, RTV, PCV and measles-containing vaccine who received medical care in the health facility during the reporting period.
    Proposed disaggregation: Sex; age (< 1 year, 1–<2 years); type of antigen.
    Data source: Routine health management information system.
    Proposed frequency: Monthly.

14. Inappropriate use of antibiotics for cough or cold in children and young adolescents
    Definition: Percentage of children and young adolescents (< 15 years old) seen in a health facility with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment (e.g. pneumonia, severe pneumonia, severe acute malnutrition, very severe disease, sepsis, meningitis, dysentery, cholera, HIV+) who were prescribed an antibiotic.
    Proposed classification: Process
    Numerator: Number of children and young adolescents (< 15 years old) seen in a health facility with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment, who were prescribed an antibiotic.
    Denominator: Number of children and young adolescents (< 15 years old) seen in a health facility during the reporting period with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment (e.g. pneumonia, severe pneumonia, severe acute malnutrition, very severe disease, sepsis, meningitis, dysentery, cholera, HIV+).
    Proposed disaggregation: Type of health facility; sex; diagnosis.
    Data source: Patient medical records.
    Proposed frequency: Monthly.

15. Completion of medical documentation for children and young adolescents
    Definition: Percentage of children and young adolescents (< 15 years old) seen in a health facility with complete key patient information in the health facility register (patient demographic data, assessment findings, classification/diagnosis, treatment, counselling and care outcomes).
    Proposed classification: Input
    Numerator: Number of children and young adolescents (< 15 years old) seen in a health facility with complete key patient information in the health facility register.
    Denominator: Number of children and young adolescents (< 15 years old) seen in a health facility during the reporting period.
    Proposed disaggregation: Type of health facility; sex; age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years).
    Data source: Health facility register.
    Proposed frequency: Monthly.

16. Quality of care data reviews for children and young adolescents
    Definition: Percentage of health facilities that conducted monthly quality of care data reviews for patients under 15 years old in the past 3 months.
    Proposed classification: Process/Output
    Numerator: Number of health facilities that conducted monthly quality of care data reviews for patients under 15 years old in the past 3 months.
    Denominator: Number of facilities assessed during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Survey.
    Proposed frequency: Quarterly.

17. Knowledge and understanding of the condition and treatment plan among children and young adolescents or their caregivers
    Definition: Percentage of children and young adolescents (< 15 years old) or their caregivers who can describe the child's condition and how to administer home treatment.
    Proposed classification: Process/Output
    Numerator: Number of children and young adolescents (< 15 years old) or their caregivers who can describe the child's condition and how to administer home treatment.
    Denominator: Number of children and young adolescents (< 15 years old) or their caregivers who were interviewed during the reporting period.
    Proposed disaggregation: Respondent; service level (inpatient, outpatient); type of health facility.
    Data source: Facility survey, client exit interviews or similar assessments.
    Proposed frequency: Quarterly.

18. Satisfaction with decision-making process for care
    Definition: Percentage of children and young adolescents (< 15 years old) or their caregivers who are satisfied with the care decision-making process.
    Proposed classification: Outcome (patient-reported)
    Numerator: Number of children and young adolescents (< 15 years old) or their caregivers who are satisfied with the care decision-making process.
    Denominator: Number of children and young adolescents (< 15 years old) or their caregivers who were interviewed during the reporting period.
    Proposed disaggregation: Type of health facility; respondent; health condition.
    Data source: Facility survey, client exit interviews or similar assessments.
    Proposed frequency: Quarterly.

19. Pre-discharge counselling on danger signs and feeding for children < 5 years old
    Definition: Percentage of caregivers of children under 5 years old who are aware of the danger signs of paediatric illness, when to seek care, and how to manage feeding during illness.
    Proposed classification: Process/Output
    Numerator: Number of caregivers of children under 5 years old who are aware of the danger signs of paediatric illness, when to seek care, and how to manage feeding during illness.
    Denominator: Number of caregivers of children < 5 years old who received care and were interviewed in the health facility during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Facility survey, client exit interviews or similar assessments.
    Proposed frequency: Quarterly.

20. Awareness of child rights during health care
    Definition: Percentage of children and young adolescents (< 15 years old) or their caregivers who reported being adequately informed about their rights to care (e.g. free treatment, medication, food, bedding, rooming-in).
    Proposed classification: Process/Output
    Numerator: Number of children and young adolescents (< 15 years old) or their caregivers who reported being adequately informed about their rights to care.
    Denominator: Number of children and young adolescents (< 15 years old) or their caregivers who were interviewed during the reporting period.
    Proposed disaggregation: Respondent; age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years); type of health facility.
    Data source: Facility survey, client exit interviews or similar assessments.
    Proposed frequency: Quarterly.

21. Disrespectful care for the child or caregiver
    Definition: Percentage of children and young adolescents (< 15 years old) or their caregivers who reported mistreatment during care (includes those who felt that they were being yelled or screamed at (verbal abuse), or were hit or pinched (physical abuse)).
    Proposed classification: Outcome (patient-reported)
    Numerator: Number of children and young adolescents (< 15 years old) or their caregivers who reported mistreatment during care.
    Denominator: Number of children and young adolescents (< 15 years old) or their caregivers who received care and were interviewed in the health facility during the reporting period.
    Proposed disaggregation: Age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years); type of health facility; type of mistreatment; respondent.
    Data source: Survey or interview records.
    Proposed frequency: Quarterly.

22. Accompaniment during care
    Definition: Percentage of children and young adolescents (< 15 years old) whose caregivers were able to accompany them during minor medical procedures.
    Proposed classification: Process/Output
    Numerator: Number of children and young adolescents (< 15 years old) whose caregivers were able to accompany them during minor medical procedures.
    Denominator: Number of children and young adolescents (< 15 years old) or their caregivers who received care and were interviewed in the health facility during the reporting period.
    Proposed disaggregation: Type of health facility.
    Data source: Survey or interview records.
    Proposed frequency: Quarterly.

23. Access to play and educational material during hospitalization
    Definition: Percentage of children (or their caregivers) who reported that the child was able to play and access educational materials during hospitalization.
    Proposed classification: Input
    Numerator: Number of children (or their caregivers) who reported that the child was able to play and access educational material during hospitalization.
    Denominator: Number of children treated as inpatients, or their caregivers, who were interviewed in the health facility during the reporting period.
    Proposed disaggregation: Age groups (< 1 year, 1–<5 years, 5–9 years, 10–14 years); type of health facility.
    Data source: Survey or interview records.
    Proposed frequency: Quarterly.

24. Clinical mentorship or training for childcare providers
    Definition: Percentage of health workers providing care for children who received clinical mentorship or training in the last 6 months.
    Proposed classification: Input
    Numerator: Number of health workers providing care for children who received clinical mentorship or training in the last 6 months.
    Denominator: Number of health workers providing care for children who were interviewed during the reporting period.
    Proposed disaggregation: Provider cadre; facility type.
    Data source: Provider interviews.
    Proposed frequency: Quarterly.

25. Stock out of essential child health medicines
    Definition: Number of days in the past 3 months when there were stock outs of at least 3 essential child medicines (amoxicillin, injectable gentamicin and zinc).
    Proposed classification: Input
    Numerator: Total number of days with stock outs of at least three essential medicines.
    Denominator: Not applicable.
    Proposed disaggregation: Inpatient/outpatient.
    Data source: Inventory of the pharmacy or dispensary.
    Proposed frequency: Quarterly.
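
To illustrate the disaggregations proposed above, the short sketch below computes the in-hospital paediatric case fatality rate (indicator 2) by condition; the monthly counts are invented purely for the example.

```python
# Hypothetical worked example: in-hospital paediatric case fatality rate
# (indicator 2) disaggregated by condition. All counts are invented.
deaths = {"pneumonia": 4, "malaria": 2, "sepsis": 3}          # numerators
diagnosed = {"pneumonia": 310, "malaria": 150, "sepsis": 45}  # denominators

for condition, died in deaths.items():
    cfr = died / diagnosed[condition] * 100  # percentage of diagnosed cases
    print(f"{condition}: case fatality rate {cfr:.1f}%")
```
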
World Health Organization
Department of Maternal, Newborn, Child and
Adolescent Health and Ageing
Avenue Appia 20
CH-1211 Geneva 27
Switzerland
email: mncah@who.int
website: https://www.who.int/teams/maternal-
newborn-child-adolescent-health-and-ageing
