Technical Guideline Monitoring QoC RMNCAH
The mention of specific companies or of certain manufacturers’ products does not imply that they are endorsed
or recommended by WHO in preference to others of a similar nature that are not mentioned. Errors and omissions
excepted, the names of proprietary products are distinguished by initial capital letters.
All reasonable precautions have been taken by WHO to verify the information contained in this publication.
However, the published material is being distributed without warranty of any kind, either expressed or implied.
The responsibility for the interpretation and use of the material lies with the reader. In no event shall WHO be liable
for damages arising from its use.
Editing and design by Inis Communication
Acknowledgements
The World Health Organization (WHO) gratefully acknowledges the role of the following
individuals, organizations and governments that contributed to the conception, technical
writing, peer-review and finalization of this guide.
WHO leadership
Moïse Muzigaba, from the WHO Department of Maternal, Newborn, Child, and Adolescent
Health and Ageing (MCA) served as the responsible technical officer and coordinated the
development of this guide under the oversight of Theresa Diaz (MCA).
Lead writers
This guide was conceived and primarily written by Moïse Muzigaba (MCA) and Kathleen
Hill (United States Agency for International Development (USAID) MOMENTUM Country
and Global Leadership Program and Jhpiego, United States of America).
Contributors
External experts
Sincere gratitude goes to the following experts who were involved in the initial
conceptualization and drafting of some sections of the guide: Sodzi Sodzi-Tettey (USAID
MOMENTUM Country and Global Leadership Program and the Institute for Healthcare
Improvement, Ghana), Stephen Luna-Muse (USAID MOMENTUM Country and Global
Leadership Program and the Institute for Healthcare Improvement, United States of
America), Debra Jackson (London School of Hygiene and Tropical Medicine, and the
University of the Western Cape, United Kingdom of Great Britain and Northern Ireland),
and Tricia Bolender (USAID MOMENTUM Country and Global Leadership Program and
the Institute for Healthcare Improvement, United States of America).
Special gratitude is extended to the following WHO staff from various regional and country
offices, as well as a consultant, for their invaluable contributions to the development,
refinement and validation of the Health information system landscape assessment (HISLA)
tool: Josephine Agyeman-Duah (WHO consultant), Shogo Kubota (WHO Regional Office
for the Western Pacific), Delgermaa Vanya (WHO Regional Office for the Western Pacific),
Zhao Li (WHO Regional Office for the Western Pacific), Ogusa Shibata (WHO Regional
Office for the Western Pacific), Justice Sitsofe Yevugah (WHO country office in Sierra
Leone), Makeba Shiroya (WHO country office in Kenya), Kenneth Mutesasira (WHO country
office in Uganda), Bongomin Bodo (WHO country office in Uganda), Kurabachew Alemu
(WHO country office in Uganda), Susan Kambale (WHO country office in Malawi), Solome
Nampewo (WHO country office in Malawi), Teshome Desta Woldehanna (retired staff, WHO
Regional Office for Africa), Leonard Cosmas (WHO country office in Kenya) and Assumpta
Muriithi (retired staff, WHO Regional Office for Africa).
Acknowledgements v
WHO reviewers
Special thanks are extended to the following WHO reviewers from its headquarters and
regional and country offices for their valuable independent technical review of the guide:
Blerta Maliqi (WHO Department of Integrated Health Services), Elizabeth Katwan (MCA),
Jean Pierre Monet (MCA), Sonali Vaid (WHO Regional Office for the Western Pacific), Hillary
Kipruto Kipchumba (WHO Regional Office for Africa) and Binyam Hailu Getachew (WHO
country office in Sierra Leone).
WHO is grateful to the following expert peer reviewers, in alphabetical order, for their
independent technical review of the content and organization of the guide: Aluvaala
Jalemba (School of Medicine, University of Nairobi, Kenya), Eyob Gebretsadik
(independent expert, Ethiopia), Jil Molenaar (University of Antwerp, Belgium), Martin
Dohlsten (United Nations Children’s Fund, Nigeria), Remi Mwamba (United Nations
Children’s Fund, United States of America), and Tamar Chitashvili (John Snow Inc., United
States of America). WHO also acknowledges the valuable support of several members
of the Life Stages Quality of Care Metrics Technical Working Group (LSQM TWG), who
provided verbal feedback during oral presentations on multiple occasions throughout
the development of this guide.
Technical and implementing partners
WHO appreciates the contributions of many technical and implementing partners who
supported the Network for Improving Quality of Care (QoC) for Maternal, Newborn and
Child Health. Their support helped highlight the need for this guide and contributed to
the evidence and selected country examples included. These include, in alphabetical
order: All India Institute of Medical Sciences (AIIMS), Department for International
Development – Deutsche Gesellschaft für Internationale Zusammenarbeit, Institute
for Healthcare Improvement, Jhpiego, Japan International Cooperation Agency, KEMRI
Wellcome Trust, London School of Hygiene and Tropical Medicine, Management Sciences
for Health, Partnership for Maternal, Newborn & Child Health, Save the Children, University
College London, United Nations Population Fund, the United Nations Children’s Fund,
and University Research Co.
Assessment and management of conflicts of interest
All external experts involved in the development of this guide, including the lead and
contributing writers, submitted a signed declaration of interest to WHO, disclosing any
potential conflicts of interest that might influence or could reasonably be perceived to
influence their objectivity and independence in relation to the guide’s content. WHO
thoroughly reviewed each declaration and determined that none posed an actual or
reasonably perceived conflict of interest concerning any aspect of the guide.
1 About the guide
1.1 Purpose and focus of the guide
Measurement is a core principle of improving health care. Regular measurement of
selected quality of care (QoC) indicators during a time-limited quality improvement (QI)
initiative helps managers and health worker teams track progress and guide iterative
changes as they work together to improve care. This guide focuses specifically on the measurement of QoC for the purpose of improving care; the various purposes for measuring QoC, and their implications for measurement methods, are reviewed in Chapter 2. The guide does not address periodic resource-intensive
methods for assessing QoC for other purposes, such as the use of health facility
assessments for planning or quality assurance (e.g. accreditation). However, it does
reference tools such as harmonized health facility assessments and service provision
assessments that can be used to periodically assess QoC to guide planning and track
progress against global and country targets.
While the focus of the guide is on measurement for improving quality of maternal,
newborn, child and adolescent health (MNCAH) care, inclusive of nutrition interventions,
the guide can also be applied to other technical areas such as sexual and reproductive
health and healthy ageing.
Fig. 1. The role of measurement in applying quality improvement methods to improve quality of care

Step 1: Select an area of health care to improve based on important outcomes, local …
Step 2: …
Step 3: Use quality tools (e.g. fishbone, process maps, 5 whys) to understand the root causes of quality of care gaps in prioritized areas and to develop change ideas to test.
Step 4: Select a set of quality of care indicators that will be measured to track progress …
Step 5: When possible, collect at least six points of baseline data based on retrospective analysis of routine data (if available). If not possible, rapidly collect weekly or daily data to ascertain the current performance of the health care process(es) to be improved (Chapter 5).
Step 6: Develop and iteratively test and implement cycles of change for identified change ideas (e.g. Plan–Do–Study–Act).
Step 7: Assess the effect of tested changes by monitoring and analysing patterns in quality of care indicator results over time to identify signals, trends and shifts (Chapter 5).
Step 8: Use qualitative data to assess the feasibility, acceptability and likely sustainability of the changes in the local context.
Step 9: If specific changes show improvement in quality of care indicators and the changes are feasible and sustainable in the local context, expand the scale of testing and adopt the changes.
Step 10: Continue tracking the quality of care indicators for at least six months to be sure the gains are being held. Thereafter, periodically measure one or two selected quality of care indicators to ensure that gains are maintained over time and adjust if needed (quality control).
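Steps 5 and 7 hinge on interpreting indicator values over time rather than as single snapshots. As a minimal illustration of one widely used run-chart rule (a sketch for orientation, not a method prescribed by this guide), the following Python snippet flags a "shift" when six or more consecutive monitoring points fall above the baseline median; all indicator values are hypothetical.

```python
from statistics import median

# Hypothetical weekly values (%) for one QoC indicator, collected as in
# Step 5 (baseline) and Step 7 (monitoring after changes are introduced).
baseline = [62, 58, 64, 60, 59, 61]            # at least six baseline points
monitoring = [63, 66, 68, 70, 71, 69, 72, 74]  # weekly values during testing

centre = median(baseline)  # run charts commonly use the median as centre line

def longest_run_above(values, centre):
    """Length of the longest run of consecutive points above the centre line."""
    longest = current = 0
    for v in values:
        current = current + 1 if v > centre else 0
        longest = max(longest, current)
    return longest

# Six or more consecutive points on one side of the centre line is a common
# run-chart signal of a non-random shift (here, an improvement).
if longest_run_above(monitoring, centre) >= 6:
    print(f"Shift detected: sustained performance above the baseline median ({centre}%).")
else:
    print("No shift signal yet; continue testing and monitoring.")
```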
For the purpose of this guide, QI is defined as the process by which health workers – as
part of multi-cadre QI teams – analyse the root causes for poor QoC, develop changes to
address those causes, and iteratively test and adapt changes while regularly monitoring
selected QoC indicators.
This guide focuses on the measurement steps in Fig. 1: selection of QoC indicators (step 4) and the use of results to track improvement and guide changes (step 7), which are essential in a broader QI process to improve care.
As highlighted in Fig. 2, this guide focuses on the selection and use of QoC indicators to
improve care (inner areas) and on the necessary enabling system interventions that make
this measurement possible (outer areas).
(Fig. 2, outer areas: assessing and improving data quality to strengthen improvement results and stakeholder trust; strengthening quality improvement measurement capability of key actors.)
Chapters 3–7 share a standard format comprising six components: key messages; a brief chapter overview; definitions of key terms and concepts used; practical guidance, including a table of key actions at national, subnational and health facility levels; and a country example to illustrate the application of the guidance in a real-life context.
• For whom should the guide be developed, in your view (target audience)?
• What are the top three to five QoC measurement topics that should be included in
the guide?
• What is your recommendation for how best to structure the guide?
• How long should the guide be?
The survey was administered digitally (using Survey Monkey) and shared with WHO
regional offices for distribution to their constituent countries to ensure broad reach across
WHO Member States. Responses were analysed to identify key themes, and the results
were used to inform the initial conceptualization and development of the guide.
Learning from the pilot test in Sierra Leone informed a revision of the HISLA tool and an
approach for its use. The refined tool and process of its application was subsequently
applied in Malawi and further iteratively refined and validated in Kenya and Uganda, with
a final validation phase in the Lao People’s Democratic Republic in the WHO Western
Pacific Region. During the final validation phase in the Lao People’s Democratic Republic, the tool was used to assess local HIS readiness to measure, report and use an expanded set of MNCAH QoC indicators. The insights gained from testing and validating the HISLA tool in Sierra Leone, Malawi, Kenya, Uganda and the Lao People’s Democratic Republic informed the final version of the HISLA tool. Additional findings from the research also informed the
development of an implementation blueprint for other countries, which is detailed in
Chapter 4.
Fig. 3. Target audiences for the guide

• National-level actors (e.g. representatives of different national ministry of health directorates or divisions): directors and managers of the health information and health informatics entity; directors and managers of MNCAH programmes; directors and managers of the quality management programmes.
• Subnational ministry of health actors (e.g. regional, provincial, district and county managers).
• Secondary- and tertiary-level facilities.

(Note: Health system and stakeholder terminology may vary in different settings.)
At the national level, the guide can be used by representatives of the ministry of health
responsible for MNCAH programmes, HIS, quality management, and other relevant
programmes. At subnational level, HIS managers, MNCAH programme managers and
QoC programme managers can use the guide to strengthen measurement in programmes
to improve QoC. Indeed, these managers are encouraged to work together closely due
to the importance of combined measurement, QI and clinical expertise for the robust
design, implementation and monitoring of initiatives to improve QoC. At the facility level,
health facility managers, health care workers, members of facility QI teams including
community members, and other relevant actors can all use the guide to strengthen
measurement as one core foundation of their improvement work. Other users of the
guide may include managers of private health care networks (e.g. faith-based health
care networks), managers of public and private health care facilities, community health
managers, as well as local organizations and implementing partners supporting the
ministry of health with QI work.
WHO defines QoC as: “…the degree to which health services for individuals and populations
increase the likelihood of desired health outcomes and are consistent with current
professional knowledge” (7). Quality health care services should be effective, safe, people-
centred, timely, equitable, integrated and efficient (Box 1).
2.2 Measuring quality of care
2.2.1 The Donabedian framework
Measurement of QoC has gained increasing attention in recent decades as recognition of
the importance of QoC for health outcomes and people’s experience of care has increased.
However, there are much earlier examples of measuring quality. For example, Florence
Nightingale was a pioneer for measurement and use of statistics for improving health care
quality during the Crimean War in the 1850s. She hired data collection teams to record
causes of death among soldiers, convinced leading statisticians to help her interpret her
data to demonstrate that poor sanitation and hygiene standards for wounded soldiers
were a greater cause of mortality than the injuries themselves, and designed a new data
visualization method to convince policy-makers of the importance of her findings (9).
In 1966, Avedis Donabedian proposed a framework for measuring QoC that remains
widely used today (10) and is applied in several of the country examples in this guide.
Although the relationship between inputs/structures, processes and outcomes is not
always straightforward, essential inputs or structures are pre-conditions for delivering
quality services that, combined with health care processes, influence the likelihood of
better health outcomes. Inputs must be used correctly and consistently as part of health
care processes to influence the health outcomes for which they are required. An important
proposition in the 2018 Lancet Quality Commission is that availability of inputs alone has
little to no effect on outcomes (6). Similarly, processes of care that utilize inappropriate
inputs (e.g. inappropriate medication, non-sterile surgical instruments, unskilled health
personnel, etc.) will be less likely to improve health outcomes or in some cases may
cause harm.
(Figure: the Juran trilogy, comprising quality planning, quality control and quality improvement.)
• Quality planning (QP): Process of defining goals for quality health care and defining
structures, strategies and activities required to achieve those goals (e.g. governance
structures, national strategies, subnational costed operational plans, etc.).
• Quality improvement (QI): Process by which local multi-cadre teams analyse the root
causes for poor QoC, develop proposed changes to address those causes, and iteratively
test and adapt changes while regularly monitoring selected QoC indicators. It should
be noted that multiple interventions, in addition to QI, contribute simultaneously to
improving QoC (e.g. policy development, clinical capacity strengthening, distribution
of skilled providers).
• Quality control (QC): Internal process of assessing quality of care within a local system
(e.g. district, facility) to verify that performance meets standards and remains stable
(e.g. accuracy of laboratory testing, quality of care provided in a specific service, etc.).
Another term commonly used alongside QP, QI and QC is quality assurance (QA).
The common purposes of measuring QoC can be mapped to the Juran trilogy components
and QA, with implications for different measurement approaches (Table 1).
Table 1. Common purposes for measuring QoC mapped to components of the Juran trilogy (QP, QI, QC) plus quality assurance (QA), with measurement approaches

• Purpose: Periodically measure QoC to assess adherence with standards and to verify that quality is being maintained once improved. Component: QC. Measurement approach: typically periodic (e.g. 2–4 times per year) and led internally (e.g. by district or facility managers) using routine health information sources, as feasible.
• Purpose: Periodically measure QoC for the purpose of regulation or accreditation. Component: QA. Measurement approach: typically infrequent and led externally (e.g. by an accreditation body) using more resource-intensive methods.
Four categories of QoC indicators:

1. Core indicators: prioritized small set of input, process, outcome and impact indicators for use by all stakeholders at every level of the health system to track and compare progress across sites and levels.
2. Quality improvement indicators: flexible menu of prioritized indicators to support rapid improvement in quality of care, led by facility-based quality improvement teams and supported by subnational managers.
3. Subnational performance indicators: flexible menu of indicators to support subnational managerial and leadership functions in improving and sustaining quality of care in facilities.
4. Implementation milestones: prioritized set of milestones to track whether quality of care programme activities are being implemented as intended, for use by quality of care programme managers at national and subnational levels.
For additional discussion of the QoC measurement needs of different actors, see section
3.4.7, which reviews the needs of different data users in a subnational QI initiative.
Core maternal and newborn QoC indicators mapped to the QoC standards (15)
1. Institutional maternal mortality ratio (disaggregated by cause)
2. Institutional obstetric case fatality rate
3. Pre-discharge neonatal mortality rate (disaggregated by cause)
4. Institutional stillbirth rate
5. Immediate administration of a uterotonic after birth for postpartum haemorrhage
prevention
6. Breastfeeding initiation within one hour of birth
7. Newborns with birthweight documented
8. Kangaroo mother care for newborns weighing 2000 g or less
9. Postpartum counselling for mother and baby
10. Companion of choice during labour and childbirth
11. Physical abuse during labour, childbirth or the postpartum period
12. Verbal abuse during labour, childbirth or the postpartum period
13. Basic hygiene provision
14. Basic sanitation for women and their family
Core paediatric and young adolescent QoC indicators mapped to the QoC standards (16)
1. Institutional child mortality rate (disaggregated by cause)
2. In-hospital paediatric case fatality rate by common paediatric conditions
3. Assessment of sick children < 5 years old based on the integrated management of newborn and childhood illness criteria
4. Treatment of possible severe bacterial infection at outpatient level
5. Kangaroo mother care for newborns weighing 2000 g or less
6. Pneumonia treatment with 1st choice antibiotic for children aged between 7 days and
5 years
7. Management of acute watery diarrhoea among children <5 years old
8. Children and young adolescents with documented malaria test results
9. Treatment of uncomplicated severe acute malnutrition
10. Management of anaemia in children and young adolescents
11. Children < 2 years of age with known HIV status for the mother and/or the child
12. Tuberculosis (TB) evaluation for children and young adolescents with presumptive TB
13. Catch-up immunization for children < 2 years old
14. Inappropriate use of antibiotics for cough or cold in children and young adolescents
15. Completion of medical documentation for children and young adolescents
16. Quality of care data reviews for children and young adolescents
17. Knowledge and understanding of the condition and treatment plan among children and
young adolescents or their caregivers
18. Satisfaction with decision-making process for care
19. Pre-discharge counselling on danger signs and feeding for children < 5 years old
20. Awareness of child rights during health care
21. Disrespectful care for the child or caregiver
22. Accompaniment during care
23. Access to play and educational material during hospitalization
24. Clinical mentorship or training for childcare providers
25. Stock out of essential child health medicines
Development of WHO recommended core QoC indicators for antenatal care, postnatal
care, care of small and sick newborns, care of women with obstetric complications, care
of adolescents, and care of ageing adults, is still ongoing; thus, these indicators are not
included in this guide.
3. Selecting quality of care indicators to monitor and guide improvement
Many programmes seeking to improve QoC monitor four types of QoC indicators – sometimes called a ‘family of indicators’ – drawing on the well-established Donabedian framework (10,11). See Chapter 2 for a definition of the first three types of indicators (i.e. input, process and outcome indicators) based on the Donabedian framework. A fourth type, ‘balancing indicators’, measures potential unintended consequences in one part of a system associated with improving care in another part of the system. Balancing indicators can be input, process or outcome indicators. Table 3 provides an example of QoC indicators, categorized by indicator type, for an illustrative paediatric health care quality improvement initiative.
(Figure: the Model for Improvement. Measures: How will we know that a change is an improvement? Changes: What changes can we make that will result in improvement? These questions feed into the Plan–Do–Study–Act cycle.)
Source: (18).
Box 2 provides examples of strong and weak improvement aim statements for selected
quality domains, based on the first question of the model (i.e. What are we trying to
accomplish?). Improvement aim statements should be specific, measurable, attainable,
relevant and time-bound. A strong improvement aim answers the questions: a) What?
The process or outcome to improve; b) Who? The patients affected and/or involved health
workers; c) How much? The magnitude/size of the change you hope to achieve; and d)
By when? The timeframe for improvement.
3.4 Practical guidance
Although there is no one-size-fits-all approach to selecting QoC indicators for use in a QI initiative to improve MNCAH care, the considerations below are an important starting point. The development of well-defined QoC indicators and associated measurement methods and data sources should be supported by managers and stakeholders with a combination of subject matter (e.g. clinical), QI and measurement expertise.

Illustrative actors: a blended team of programme managers, QI stakeholders and health workers representing a range of technical expertise (e.g. QI focal points, HIS officers, MNCAH programme managers, clinical providers, etc.).

3.4.1 Identify potential (candidate) QoC indicators
Important considerations and criteria for selection of strong QoC indicators are listed
below. Sometimes it may not be possible to satisfy all such criteria. However, over time,
as managers and stakeholders acquire increasing expertise and experience, they will
usually find it easier to meet these indicator selection criteria.
• Relevance to the QI aim: The indicator must be closely aligned with the specific QI aim,
ensuring it measures what is intended to be improved.
• Measurability: The indicator must be measurable with clearly defined metrics or
standards, allowing for objective assessment of improvement (or decline). Data
collection for the indicator should be feasible within the health care setting using
available resources and technology.
• Validity: The indicator should validly reflect the QoC construct it is intended to measure.
To the extent possible, it must be based on established evidence or best practices in
health care.
• Reliability: In the context of a multi-site QI initiative, the indicator should consistently
produce reliable data across different settings, providers and patient populations. The
results should also be reproducible under similar conditions.
• Sensitivity to change: The indicator should be sensitive to changes in care processes
or outcomes, allowing organizations to detect improvements (or declines) associated
with interventions.
• Actionability: The indicator should provide actionable information that helps health
workers improve care. Data for the indicator should be available promptly enough to
drive real-time improvements.
• Patient-centred: Indicators of patient-centred care should focus on aspects of care that
matter most to patients, aligning with patient values, priorities and needs.
• Comparability: In a multi-site QI initiative, the indicator should use standardized
definitions to facilitate comparison and benchmarking across sites and to motivate
improvement and friendly competition.
• Equity-focused: Where relevant, the indicator should account for variations in care
among different patient groups and highlight disparities in quality of care (including
care processes and outcomes). Often QoC indicators can be disaggregated by equity
‘stratifiers’ to identify populations receiving poorer quality of care, and to inform and
monitor interventions to reduce disparities in quality of care.
• Feasibility of collection and reporting: The process of collecting, analysing and reporting
on the indicator should not place an undue burden on health care providers. The
indicator should be straightforward to collect without requiring complex or expensive
infrastructure.
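One practical way for a blended team to apply these criteria is to score each candidate indicator before discussing trade-offs. The sketch below is illustrative only: the criteria subset, the 0–2 scale and the candidate indicators are hypothetical, not a WHO scoring tool.

```python
# Score candidate indicators against a subset of the selection criteria above
# on a 0-2 scale (0 = not met, 1 = partly met, 2 = fully met) and rank them.
CRITERIA = ["relevance", "measurability", "validity", "reliability",
            "sensitivity", "actionability", "feasibility"]

candidates = {
    "% women receiving uterotonic after birth":  [2, 2, 2, 1, 2, 2, 2],
    "% deliveries monitored with partograph":    [2, 1, 2, 1, 2, 2, 1],
    "Patient satisfaction (unvalidated survey)": [1, 1, 0, 0, 1, 1, 1],
}

# Rank by total score; a team would still review weak criteria (score 0)
# rather than rely on the total alone.
for name, scores in sorted(candidates.items(),
                           key=lambda kv: sum(kv[1]), reverse=True):
    weak = [c for c, s in zip(CRITERIA, scores) if s == 0]
    note = f"  (unmet: {', '.join(weak)})" if weak else ""
    print(f"{sum(scores):2d}/14  {name}{note}")
```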
3.4.3 Balance indicator types
Once a list of candidate indicators has been established, determine the appropriate balance between process indicators (e.g. adherence to clinical protocols) and outcome indicators (e.g. change in health status) needed to track progress toward and guide
iterative changes to achieve the improvement aim (see Chapter 2 on indicator types). Also
consider the inclusion of input indicators, as appropriate, and the inclusion of balancing
indicator(s) (see Table 3). A balancing QoC indicator typically measures a process or
outcome that is not targeted in a QI initiative, to verify that changes to improve a specific
health care process do not cause an unintended deterioration of other care
processes and outcomes that are not the focus of the improvement effort. It is important
to ensure that stakeholders provide input on the balance of indicators they believe can
best measure and guide progress toward the QI aim and identify unintended worsening
of quality of care in areas that are not a focus of the QI initiative.
3.4.6 Defining data sources for QoC indicators that do not exist in
the HMIS and considerations for their incorporation
In some instances, QoC indicators that are important to measure temporarily during a time-limited QI initiative may not be appropriate to monitor over the longer term in a routine HIS.

In other instances, QoC indicators selected for monitoring in a time-limited QI initiative will
be highly relevant for incorporation into the HIS for sustained monitoring for the purpose
of QP, QI and QC. For example, in some settings the cause of a maternal, newborn or child
death is not captured as a standardized data element in the national HIS. However, this
is vital information to inform QP, QI and QC processes. Chapter 4 reviews considerations
for incorporating QoC indicators into a HIS.
Table 4. Example of essential information to be defined for each selected QoC indicator

• Indicator: Birth companion of choice. Definition: % of women who wanted and had a companion of choice supporting them during childbirth in the health facility. Numerator: # of women who wanted and had a companion of choice supporting them during childbirth in the health facility. Denominator: # of recently delivered women who completed the exit survey. Measurement method: exit survey. Frequency: monthly. Responsible person(s): maternity QI team.

• Indicator: Immediate administration of a uterotonic after birth. Definition: % of women who gave birth in a health facility who received a prophylactic uterotonic immediately after birth for prevention of PPH. Numerator: # of women who gave birth in a health facility who received a prophylactic uterotonic immediately after birth for prevention of PPH. Denominator: # of women who gave birth in the facility. Measurement method: facility labour and delivery register. Frequency: monthly. Responsible person(s): maternity QI team.

• Indicator: Contraceptive counselling and offer of a modern contraceptive method to adolescents. Definition: % of adolescent clients (15–19 years) counselled and offered a modern contraceptive method during their health facility visit. Numerator: # of adolescent clients (15–19 years) who were counselled and offered a modern contraceptive method during their visit. Denominator: # of adolescent clients (15–19 years) attending the health facility during the reporting period. Measurement method: review of facility registers and client records. Frequency: monthly. Responsible person(s): family planning nurse or adolescent health QI team.
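To show how a definition like those in Table 4 translates into a routine calculation, the sketch below computes the uterotonic indicator from hypothetical labour and delivery register records; the field names are illustrative, and real registers or HMIS exports will differ.

```python
# Hypothetical patient-level records from a facility labour and delivery register.
register = [
    {"woman_id": 1, "delivered_in_facility": True, "uterotonic_given": True},
    {"woman_id": 2, "delivered_in_facility": True, "uterotonic_given": False},
    {"woman_id": 3, "delivered_in_facility": True, "uterotonic_given": True},
]

# Denominator: women who gave birth in the facility during the month.
denominator = sum(r["delivered_in_facility"] for r in register)
# Numerator: of those, women who received a prophylactic uterotonic.
numerator = sum(r["delivered_in_facility"] and r["uterotonic_given"]
                for r in register)

if denominator:  # guard against an empty reporting month
    value = 100 * numerator / denominator
    print(f"Uterotonic administration: {value:.1f}% ({numerator}/{denominator} women)")
else:
    print("No facility deliveries recorded; indicator not computable this month.")
```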
c. Less is more
The primary objective of measurement in QI efforts is learning and improving care. It is important to strike a balance between “ideal” and “good enough” QoC indicators.
There is no hard science on the number of indicators needed but often 4–6 carefully
selected QoC indicators are sufficient for monitoring and informing interventions to
achieve a single improvement aim. This is usually preferable to collecting ‘nice to know’
data that will increase the measurement burden for front-line teams and are not likely
to be used or to add much value for QI teams and managers supporting these teams.
Fig. 8. Pyramid of data collection and use, spanning the global level, the national level and the subnational level (region/province/district, etc.).
Table 5 outlines key actions and considerations at the national, subnational and facility
levels to support the selection of meaningful indicators for the purpose of improving care.
In each state, the State Ministry of Health convened a consultation with key
stakeholders, including local government area and facility representatives, to
prioritize areas for improvement (aims) based on the WHO standards for improving
quality of maternal and newborn care published in 2016 (15).
To inform selection of QoC indicators that were feasible for monitoring in the
programme setting, health information officers of the State Ministry of Health mapped
existing QoC data elements in the standardized facility labour and delivery and
postnatal care registers, and facility monthly health information reporting forms.
MNH programme managers, health information officers and QI experts worked
together to select a small number of QoC indicators (process and outcome) based
on existing data availability and the processes and outcomes prioritized to achieve
the two programme aims.
Although most selected QoC indicators could be measured via existing data in facility
registers, stakeholders elected to measure a small number of QoC indicators that
they considered vitally important but for which there was no existing data. They
recommended that these indicators be monitored by adding a column to the labour
and delivery and postnatal care register. These indicators are noted (*) below.
Childbirth and early postnatal care for pregnant women

Improvement aim (tailored in each facility to include a target based on baseline performance): Improve adherence with evidence-based best practices for routine labour and delivery and early postnatal care from X% (baseline) to Y% within 10 months among pregnant women giving birth in 91 facilities in Ebonyi and Kogi states (this improvement aim was selected to contribute to a broader set of interventions to reduce preventable maternal complications and stillbirths).
Priority clinical processes (labour and delivery care):
• Document fetal heart rate (FHR) on admission to maternity
• Document blood pressure of pregnant woman on admission to maternity (early detection of pre-eclampsia)
• Monitor progress of labour and maternal/fetal well-being during labour to guide care and detect early signs of complications
• Facilitate presence of birth companion of choice if desired
• Administer uterotonic immediately after birth to prevent PPH

Quality of care indicators (indicators marked * required addition of a column in the facility register):
Process indicators
• % women with fetal heart rate documented on admission*
• % women with blood pressure documented*
• % deliveries monitored with partograph*
• % women delivered with companion of choice*
• % women receiving prophylactic uterotonic in third stage of labour
Health outcome indicator
• Stillbirth rate (intrapartum)
Note: BCG: Bacillus Calmette-Guérin vaccine for TB; IPV: inactivated polio vaccine; OPV: oral
polio vaccine; PPH: postpartum haemorrhage.
4. Assessing and strengthening health information systems to measure and monitor prioritized quality of care indicators
This chapter reviews the different components of a comprehensive HIS, including the data
sources for monitoring and improving QoC across system levels for women, newborns,
children and adolescents. The chapter provides practical guidance on how stakeholders
can assess the availability of data for selected QI indicators in the existing HIS. Finally, the chapter reviews considerations and key actions for identifying and incorporating QoC indicators and data elements into the HIS for long-term monitoring (beyond the life of a specific QI initiative).
• Health facility data recording and reporting forms: In this guide, used to refer to case notes, individual patient records, individual reporting forms (e.g. surveillance forms) and home-based records, or their equivalents, which can be used for patient care to collect or record patient-level information in a RHIS, whether paper-based or digital. They also include health facility registers, which consist of a list or file containing uniform information about
individuals, collected in a systematic and comprehensive way, in order to serve
a predetermined purpose (26). Registers are typically used to collect or record
socioeconomic and demographic information from the client or patient; their
clinical history and diagnosis; as well as treatment/care plan and outcomes. The
health facility register can be used as the basis for tracking individual patient
health care processes and outcomes, especially when they exist as electronic
medical records. Information from health facility registers is often summarized
and sometimes tallied across other health facility registers to create summary
registers or tally sheets or reporting forms, for the purpose of reporting
aggregated patient/client data. Patient registries, on the other hand, are
a collection of information about individuals, usually focused on a specific
diagnosis or conditions. Registry data is stored in a database and can provide
health care providers and researchers with first-hand information about people
with specific conditions, both individually and as a group, and over time. Patient
registries are different from health facility registers in that the former collect
patient information that is disease- or condition-specific whereas the latter
collect information about the patient regardless of their condition or disease (27).
• Data element. The smallest named item of data that has a unique meaning and
can assume a distinct value (28). With respect to an HMIS, data elements might
include client name, gender/sex, diagnosis, etc. Data elements are associated
with data types that define their form. These can include simple data types, such as date, time or numeric value, or complex data types, such as addresses.
• Data. A collection of data elements that conveys specific information (e.g. % of women who received antenatal care services during pregnancy in facility x). Data may include any form of text, sound, visual or audio-visual recording (29).
• Data point. A piece of data representing one observation taken at a given point in time (e.g. one cell in a data table showing antenatal care coverage in facility x in a given month).
• Metadata. A structured reference data set that provides information about
other data (e.g. numerator and denominator of a specific indicator, or indicator
rationale, or indicator data source). They are the information needed to explain
and understand the data or values being presented (30).
• Dataset. A structured collection of related data on a given subject, usually presented in a data table, where rows typically represent individual records or observations and columns represent variables or features related to those observations (ISO, 2014).
• Database. Structured system that allows data to be easily stored, accessed,
manipulated and updated (31).
• Information. Classified, organized and/or processed data that has some
meaningful value for the user (usually a result of data analysis) (32).
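A small worked example can make these distinctions concrete. The sketch below is illustrative only (all names and values are hypothetical): a dataset is represented as rows of data elements, one cell is a data point, and the indicator metadata are held separately from the data they describe.

```python
# Rows are observations; keys such as "anc_coverage_pct" are data elements.
dataset = [
    {"facility": "Facility X", "month": "2024-01", "anc_coverage_pct": 78},
    {"facility": "Facility X", "month": "2024-02", "anc_coverage_pct": 81},
]

# A data point: one observation at one point in time (one cell of the table).
data_point = dataset[0]["anc_coverage_pct"]  # -> 78

# Metadata describe the data rather than being the data themselves.
metadata = {
    "indicator": "Antenatal care coverage",
    "numerator": "# pregnant women who received ANC services",
    "denominator": "# pregnant women expected in the catchment area",
    "data_source": "RHIS monthly reporting form",
}

print(data_point, "-", metadata["indicator"])
```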
4.3.2 Components of a typical HIS with MNCAH data
Fig. 9 shows an illustrative example of an HIS and its subsidiary data systems that
constitute important data sources for QoC measurement and monitoring for the purpose of improving MNCAH care. Stakeholders, including QI teams at national, subnational and service delivery levels, can use a variety of data sources to identify and examine QoC problems, define improvement aims and monitor QoC over time as they make changes.
The WHO framework and standards for country HIS (33) provides a detailed explanation
of some of these data sources.
Table 6 provides an illustrative example of the types of HIS data sources that a QI team
could use to plan and monitor a QI initiative to improve care for diarrhoea in children in
a setting of high diarrhoea mortality.
Table 6. Example of data sources from various components of the HIS to monitor quality of care for children with diarrhoea

Improvement aim: reduce morbidity and mortality from diarrhoea in children under 5 years old.

• Input indicators: stockouts of oral rehydration solution/zinc (data source: logistics management information system and inventory monitoring sheet); % health workers trained in diagnosis and management of diarrhoea (data source: human resources information system and training registers).
• Process indicators: % children with diarrhoea treated with oral rehydration solution and zinc; % children with diarrhoea assessed for severe dehydration (data source: HMIS data, e.g. case notes, health facility registers).
• Outcome indicator: institutional case fatality rate related to diarrhoea (data source: HMIS data, e.g. case notes, health facility registers, or civil registration and vital statistics system).
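As an illustration of how the outcome indicator in Table 6 could be derived from patient-level HMIS records, the sketch below computes an institutional diarrhoea case fatality rate; the records and field names are hypothetical.

```python
from collections import Counter

# Hypothetical case records (e.g. abstracted from case notes or registers).
cases = [
    {"diagnosis": "diarrhoea", "outcome": "discharged"},
    {"diagnosis": "diarrhoea", "outcome": "died"},
    {"diagnosis": "pneumonia", "outcome": "discharged"},
    {"diagnosis": "diarrhoea", "outcome": "discharged"},
]

# Count outcomes among diarrhoea admissions only.
outcomes = Counter(c["outcome"] for c in cases if c["diagnosis"] == "diarrhoea")
total = sum(outcomes.values())

if total:  # case fatality rate = diarrhoea deaths / all diarrhoea admissions
    cfr = 100 * outcomes["died"] / total
    print(f"Diarrhoea case fatality rate: {cfr:.1f}% ({outcomes['died']}/{total})")
else:
    print("No diarrhoea admissions recorded in this period.")
```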
Fig. 9. An illustrative example of a typical HIS with different subsidiary data systems that can act as potential data sources to support quality improvement, quality planning and quality control. The figure spans national, subnational, village/clinic and household/community levels and includes: national aggregated data (e.g. in DHIS2); summary data for districts/counties or equivalent (e.g. in DHIS2); patient/client registries; a health facility registry (geospatial information systems); a clinical trial registry; aggregated data (summary forms, aggregated data in DHIS2, etc.); client- and caregiver-reported health outcomes and experience of care (cellphone interviews, client surveys, etc.); providers’ experience of care; paediatric death audits (PDA); home-based records; the demographic and health survey; the multiple indicator cluster survey; community scorecards for quality of care accountability; community sentinel surveillance systems; and the community health information system.
4.4 Practical guidance
Different actions are required at national, subnational and health facility levels to assess the readiness of the local HIS to provide data needed to monitor priority QoC indicators at each level. QI stakeholders who are setting up new QI initiatives at each level will need to track a set of priority QoC indicators that may vary by number, purpose of measurement, and the need for their institutionalization in the local HIS. The guidance in this chapter is therefore organized to account for these differences across the health system hierarchy. However, there are some key principles that should underpin the process of HIS assessment and adaptation for QoC monitoring.

Illustrative actors: a team of HIS and/or M&E managers and/or technicians (or their equivalent) at national or subnational level, depending on local policies and governance structures.
Leveraging the existing HIS to support QoC monitoring activities is usually a more
sustainable solution to improving the availability of QoC data than introducing short-lived
parallel data systems. This is true for QoC indicators prioritized for long-term monitoring
by national and subnational actors, as well as the indicators that are specific to a short-
term QI initiative that does not require that the indicators be institutionalized in the local
HIS for long-term monitoring.
Strengthening the existing data system can also decrease the reporting burden for health
facility teams and minimize data redundancies and reporting inefficiencies.
Ensuring that there are clearly defined metadata for all priority QoC indicators is necessary before conducting a HIS landscape analysis. This step is essential to ensure the primary data recording, indicator calculation, reporting and analysis are standard across levels (e.g. health facilities, districts, etc.).

For some priority QoC indicators, especially those already established in the existing M&E framework for regular collection and reporting, the metadata may already exist, although it may be necessary to review and align them with emerging MNCH QoC measurement guidance and frameworks at the global level.
Indicators may have more than 30 metadata fields. This guide recommends completion
of a minimum set of 10 metadata fields for each QoC indicator prioritized for national or
subnational level monitoring. These fields are explained in Table 7 and one illustrative
indicator (i.e. pre-discharge neonatal mortality rate by cause) is used as a running example
across the metadata fields.
Field name – Definition and rationale

User and purpose of use
• Once the technical area is defined, it is recommended to establish consensus on who will use the data generated by this indicator and to what end. If the user and the purpose of using the indicator cannot be determined, the inclusion of the indicator in the HIS landscape assessment would not be justified.
• In the example, maternity health workers and members of the maternity QI team would use the indicator to understand leading causes of newborn deaths in their facility to inform planning and monitoring of a QI initiative to reduce newborn deaths.

Disaggregation
• This defines how the indicator can be disaggregated according to specific profiles that may be of interest to different stakeholders (i.e. by geographical location, age group, sex, rural/urban, among others).
• In the example, the indicator data can be disaggregated by small and/or sick newborns, or by the level of facility care (i.e. level 2 or 3).

Numerator
• The numerator is usually the actual number of people that experience an event, or items/objects that exhibit a particular trait or characteristic, in a specified population and period.
• In the example, the numerator is: # neonates up to 28 days of completed life who were born live in a facility and died from specific causes prior to discharge from the facility (excluding re-admissions for illness).

Denominator
• The denominator is the total number of the population or items/objects of interest from which the numerator was drawn.
• In the example, the denominator is: total # live births.

Individual data elements
• This is where the individual data elements of the numerator and denominator are specified. This is critical for comprehensive mapping, which requires the examination of individual data elements and their availability in different data sources, and to identify opportunities for HIS adaptation to include missing or partially available indicators.
• In the example indicator, the individual data elements are:
– # neonates up to 27 days of completed life who were born live in a facility and died prior to discharge from the facility, due to prematurity (excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a facility and died prior to discharge from the facility, due to sepsis (excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a facility and died prior to discharge from the facility, due to asphyxia (excluding re-admissions for illness).
– # neonates up to 27 days of completed life who were born live in a facility and died prior to discharge from the facility, due to other causes (excluding re-admissions for illness).
– Total # live births.

Frequency of data reporting
• This refers to how often data is reported to track the progress and performance of time-bound QI initiatives or for routine monitoring for QC purposes. The reporting frequency will depend on several factors, including the purpose of measuring the indicator (e.g. QI or QC), the urgency of the indicator data for making decisions, the available resources, and the data source (i.e. RHIS, periodic survey, etc.).
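To illustrate how the minimum metadata fields could be held in a structured, machine-readable form (for example, to seed an HIS landscape assessment), the sketch below encodes the running example; the schema is hypothetical and not a WHO or HISLA data standard.

```python
from dataclasses import dataclass

@dataclass
class IndicatorMetadata:
    """Illustrative container for a subset of the metadata fields above."""
    name: str
    user_and_purpose: str
    disaggregation: list
    numerator: str
    denominator: str
    individual_data_elements: list
    reporting_frequency: str

pre_discharge_nmr = IndicatorMetadata(
    name="Pre-discharge neonatal mortality rate (by cause)",
    user_and_purpose=("Maternity health workers and QI team: understand leading "
                      "causes of newborn deaths to plan and monitor a QI initiative"),
    disaggregation=["small and/or sick newborns", "facility level 2 or 3"],
    numerator=("# neonates born live in the facility who died from specific causes "
               "prior to discharge (excluding re-admissions for illness)"),
    denominator="Total # live births",
    individual_data_elements=[
        "pre-discharge deaths due to prematurity",
        "pre-discharge deaths due to sepsis",
        "pre-discharge deaths due to asphyxia",
        "pre-discharge deaths due to other causes",
        "total live births",
    ],
    reporting_frequency="e.g. monthly (QI) or quarterly (QC)",
)

print(pre_discharge_nmr.name)
```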
c. Develop, adopt or adapt an HIS landscape assessment tool

Once the metadata fields for all priority indicators have been completed, a standardized instrument can be used to collect different pieces of information from the local HIS. This helps determine whether provision has already been made for these indicators or their constituent data elements to be collected, and whether they are already being collected and reported, routinely or periodically.
• An indicator and all its data elements are available, as defined, in the local HIS (i.e. provision has been made in the HIS for these indicators and their metadata to be collected either routinely or periodically, using the same or different data sources; see Fig. 9).
• An indicator with a name similar to what is being assessed already exists in the local
HIS, but the constituent data elements are different by definition. This is possible, for
example, where indicators recommended globally are being adopted for use at country
level but the standard of care they seek to measure has not been fully adapted to the
local context. For example, early initiation of breastfeeding can be defined as the %
babies born alive in a facility who are breastfed within one hour (60 minutes) of birth, but
a country that uses 90 minutes as the standard for timing of breastfeeding may have
the same indicator name in the local HIS but defined slightly differently as % babies
born alive in a facility who are breastfed within 90 minutes of birth.
• An indicator for which only a portion of the constituent data elements can be found
in the local HIS. Numerators are typically more difficult to find than denominators,
especially when the former consists of more than one data element. This is often the
case for process indicators that tend to measure a process of care involving more than
one activity or care pathway.
• The indicator being assessed is completely new and neither the numerator nor
denominator – including all individual data elements – exist anywhere in the local
HIS. This is typical for indicators that have been recently introduced as part of new
QI initiatives or indicators developed for emerging QoC measurement areas, such as
experience of care indicators.
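These four outcomes can be treated as a small decision rule when mapping each priority indicator. The sketch below is illustrative only (it is not part of the HISLA tool); it classifies one indicator against hypothetical data elements found in a local HIS, using the breastfeeding example above.

```python
def classify_mapping(required: set, found: set,
                     name_match: bool, definition_match: bool) -> str:
    """Return one of the four mapping outcomes described above."""
    if found >= required and name_match and definition_match:
        return "fully available as defined"
    if name_match and not definition_match:
        return "similar name, different definition (needs local adaptation)"
    if found & required:
        return "partially available (some data elements missing)"
    return "completely new indicator"

# Early initiation of breastfeeding: global definition uses 60 minutes,
# while the hypothetical local HIS records a 90-minute data element.
required = {"breastfed_within_60min", "live_births"}
found_in_his = {"breastfed_within_90min", "live_births"}

print(classify_mapping(required, found_in_his,
                       name_match=True, definition_match=False))
# -> similar name, different definition (needs local adaptation)
```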
Given these possibilities, the starting point should be to check whether an HIS landscape assessment tool has already been developed locally for the same or a similar
purpose. There are also some programme-oriented tools that have been developed
globally such as the Every newborn-measurement improvement for newborn and stillbirth
indicators (EN-MINI) tools for routine health information systems (34), and the Maternal,
newborn, child and adolescent health RHIS country mapping template (35).
This guide describes how to use a new HIS landscape assessment tool developed by WHO
specifically to map QoC indicators and their constituent data elements in the local HIS
using either routine or non-routine data sources. This tool is entitled: Health information
system landscape assessment (HISLA): a tool for assessing the feasibility of collecting,
reporting, and using quality of care indicators, or the ‘HISLA tool’ for short. As described
in section 1.2.2, the HISLA tool was iteratively tested and refined in five countries. The tool is published together with instructions on how to use it. In summary, it consists of four sections:

1. Introduction: outlining the purpose of the HISLA tool and its components.
2. User instructions: providing step-by-step instructions on how to use the tool.
3. Indicator mapping form: to be used to map each priority QoC indicator against the
local HIS based on a suite of metadata.
4. HIS tools and linkage form: an adaptable form that can help capture information about
different data recording, reporting and visualization tools being used for a specific
technical area (e.g. child health).
The country example included at the end of this chapter provides an example of the
analytical output that was generated from the HISLA tool based on the assessment
conducted in Uganda.
• A desk review is recommended when there are limited resources to support site visits
to health facilities and the local HIS is well developed. In such settings, all or at least
most health facilities in the country will be using the same standardized data recording
and reporting tools, and the health facilities are reporting aggregated data using the
same platforms and procedures. A desk review will also usually require the most up-to-date indicator reference sheets and protocols describing which indicators should be
collected in different types of health facilities. In the absence of such information, a
desk review could yield inaccurate information to inform HIS adaptation.
• Site visits to health facilities have several benefits relative to desk reviews. Visiting
health facilities offers an opportunity to better understand: how services are organized
for the technical areas of interest (e.g. maternal health) and at different service
levels; whether the organization of services mirrors the flow of health data; how data
recording/collection, aggregation and upward reporting are done; the format of the
tools used for these operations and any differences across health facilities; and an
initial glance at the possibility of measuring, reporting and using the MNCAH QoC
indicators of interest. Therefore, if resources are available, the most objective approach
to assessing the availability of QoC indicators and their metadata in the local HIS is
through site visits.
settings. Furthermore, it is common for health facilities operating at different levels of the health system (i.e. primary, secondary, tertiary health facilities), or health sector (i.e. private and public health facilities), to collect and report data for different indicators depending on the scope of services they offer. For example:
– Health centres will usually not collect data on indicators related to inpatient care as they often do not offer inpatient services. Similarly, such facilities have fewer service points compared to their higher-level counterparts and, correspondingly, use fewer health facility registers and collect the least amount of data related to MNCAH QoC.
– Also, depending on how the services are organized, some health facilities are not
mandated to report mortality data as they experience fewer deaths and/or refer
critically ill patients to health facilities at a higher level.
– Health facilities in the private sector may also collect specific data not collected by
their public sector counterparts or use different reporting mechanisms. Local HIS
are typically designed using a tiered approach to data collection and reporting.
Having a good mix of health facilities helps with understanding which data are collected
and reported by which type of health facility, and whether the tools being used are the
same/standardized.
• Consider informing health facility managers about planned site visits. The
modalities for this process will vary depending on the local protocols and procedures.
Sometimes, when the assessment is organized and/or supported directly by the
national ministry of health, direct communication to subnational authorities informing
them about the planned activities and the criteria for health facilities to be assessed
will trigger further communication to the leadership of the concerned health facilities
requesting permission for the activities to be carried out. The goal is to ensure that all
the leaders and managers at the relevant levels of the health system are aware of the
scope and nature of the HIS landscape assessment and are in support.
• Select and orient a team of assessors. The team of assessors can consist of HIS,
M&E and programme focal points representing the ministry of health. Where possible,
representatives from partner organizations can be invited to support the assessment.
The goal is to ensure that the team has a good balance of technical expertise related to
the programme of interest and HIS/M&E for complementarity during the assessment.
– Several teams may be required to carry out the assessment depending on the
number of health facilities to be visited and the distance between them. However,
each team should consist of no more than six people so that on the day of the
assessment, the movement in the health facility from one service point to the
next is less disruptive.
– All the members of the team should be oriented on the MNCAH QoC indicators to
be assessed, the methodology and purpose of the assessment, and how the site
visits are to be conducted. It is important for each team member to be familiar
with the indicators being assessed prior to the site visits as this knowledge will
help them ask the right questions and visit the most relevant service points in
the health facility. For example, knowing that there is an indicator that measures
access to play and educational material during hospitalization for children prior
to the health facility visit will prompt team members to ask if and how such an
indicator is measured and how the data is recorded and reported, if collected. A
desk review might therefore be an important step of the preparatory activities.
Conduct site visits
Ideally, the assessment should start with a courtesy visit to the management of the sampled health facility.
• Here, the manager can be briefed on the purpose of the visit and the process that will be followed, including the expected level of service disruption and duration.
• The manager can be requested to assign a staff member, as available, to take the team of assessors through the various service points that patients seen at the health facility come into contact with until they are discharged (whether alive/cured or deceased). This includes both inpatient and outpatient services.
As described earlier, this process offers an opportunity to examine which data are
collected where, how, and using which tools. Box 4 provides a summary of a generic
process that can be followed to conduct health facility assessments using child health
services as an example.
When moving from one service point to the next, it is important to write down notes regarding the services offered in that health facility, key observations made, and the names of the data collection and reporting tools used. This information can be used during a debriefing meeting, which the teams should attempt to hold before the actual indicator mapping process begins. The debriefing meeting allows teams to compare notes and determine whether there are any differences in observations that must be considered when indicators and their data elements are eventually mapped. A sample template for field observations is provided in Table 8.
Analyse the HIS landscape assessment data
Summary results can be organized to show two important categories of indicators:
• Indicators immediately available for adoption and use: These include indicators found to already exist in the HIS as defined, or those for which 100% of the data elements required to calculate them are provided for in the HIS. Therefore, zero to minimal effort would be required to start using them in the short term to drive QI. For example, an indicator may not currently be reported in the HMIS as a percentage or rate, but if the data elements to calculate it as a rate are routinely collected, efforts to adopt the indicator in the immediate term might include extracting data directly from different (or the same) sources to calculate the indicator as a rate.
• Indicators requiring extra effort to adopt and use: These are indicators whose data elements could not be found in the HIS (either in part or as a whole) and that would require considerable effort to integrate into the HIS.
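For teams managing the mapping results digitally, the classification logic described above can be expressed in a few lines of code. The following is a minimal sketch in Python; the indicator and data element names are hypothetical, and a real implementation would read from the completed HISLA indicator mapping form.

```python
# Minimal sketch (hypothetical names): classify a mapped QoC indicator by the
# share of its required data elements found in the local HIS during mapping.

def classify_indicator(required_elements, elements_found_in_his):
    """Return the adoption category for one indicator."""
    found = [e for e in required_elements if e in elements_found_in_his]
    coverage = len(found) / len(required_elements)
    if coverage == 1.0:  # all data elements already collected in the HIS
        return "Immediately available for adoption and use"
    return "Requires extra effort to adopt and use"

# Hypothetical example: two data elements needed to report an indicator as a rate.
his_elements = {"live births", "newborns breastfed within 1 hour"}
print(classify_indicator(
    ["newborns breastfed within 1 hour", "live births"], his_elements))
# -> Immediately available for adoption and use
```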
Another major initiative undertaken has been the revision and streamlining of health facility registers. Previously, many registers used at health facilities were redundant or did not capture essential data. By revising these registers and reducing their number, the Kenya Ministry of Health has been able to streamline data collection and ensure that only the most relevant data are captured. This revision process not only improves the quality of the data collected but also reduces the administrative burden on health workers.
Kenya has also prioritized data harmonization through a series of workshops
designed to align health programme indicators and standardize the forms used
for data collection. These workshops bring together stakeholders from various
health programmes, ensuring that indicators are fully aligned, which promotes
data consistency and reduces redundancies. This alignment is critical for making
national-level reporting more efficient and improving the overall quality of health
data in the country.
A key component of Kenya’s health data standardization efforts is the periodic review
of the Health sector indicator manual. This review process is triggered by changes
in health policies, the National Health Sector Strategic Plan, or new data needs
identified by health programmes. The HIS Unit in the Ministry of Health is responsible
for initiating this review process by inviting various health departments, divisions
and units to submit proposals for new or redefined health indicators. Once these
proposals are received, a technical working group is convened to review, refine and
revise the indicators. The updated indicators are then incorporated into the manual.
Following the revision of the indicators, Kenya moved to the design and review of
data collection tools. A task force is convened to design the registers and reporting
tools based on the periodically updated manual, ensuring that the tools capture
all relevant data elements and are user-friendly. To further support health workers
in using these tools, instruction manuals are created to guide them through the
data collection process. Before these tools are rolled out nationwide, they undergo
pre-testing in selected settings to ensure they are practical and user-friendly in
real-world contexts. Feedback from these pre-tests is then used to make final
adjustments to the tools before their full national implementation.
Once finalized, the revised tools are printed and distributed to health facilities
across the country. To ensure a smooth transition, health workers are trained on
the use of the new tools, and a clear transition plan is developed and disseminated.
This plan helps guide health workers as they switch from the old data collection
systems to the new standardized ones.
The tools are then customized into DHIS2 to allow for seamless data entry and
reporting. This ensures that all the health data collected at facility and sub-national
levels can be efficiently integrated into the national HIS. By integrating these revised
tools into DHIS2, Kenya has improved the timeliness and accuracy of health data
reporting, which has strengthened decision-making at all levels of the health system.
The process of reviewing and updating Kenya’s HIS tools is not a one-time effort but
is instead part of an ongoing process. Every two years, the health sector indicator
manual and associated data collection tools undergo further reviews as guided by
the country’s Health Information Policy. These periodic reviews ensure that the
health data collection systems remain aligned with emerging health priorities,
policy shifts and changing data needs.
Source: (36).
Technical considerations
The responsible team should explore how the HIS infrastructure, design and interoperability need to evolve to accommodate new data points and functions. Key actions may include the following.
• Ensure that the HIS can be scaled up effectively to handle new indicators. This may require enhancing its capacity for data recording, indicator calculation, reporting and processing. This includes verifying, for example, that HMIS tools, such as case notes and facility registers, can accommodate the additional data volume without compromising the integrity of the tool. Adding missing indicators or metadata to the HMIS may involve adding new columns to existing tables or creating entirely new tables, ensuring that each field is assigned the appropriate data types and lengths.
• Identify any performance bottlenecks in the HIS that could be exacerbated by adding
new indicators, such as low-quality recording of patient information or delays in
reporting summary data.
• Implement version control of adapted tools to ensure that changes are well-
documented and reversible if needed.
• Align new indicators with existing data standards, such as the International Classification of Diseases (ICD-11) codes, to ensure consistency and comparability across datasets.
• Develop validation rules to enforce correct data entry for the new indicators. This
could include defining acceptable value ranges, required fields or validation checks
for certain data types (e.g. numeric vs text).
• Ensure that the new indicators and their data elements are integrated in a way that
maintains compatibility with other systems, such as electronic medical records,
laboratory information systems, supply chain management systems, or national health
databases. This includes ensuring consistent data formats and reporting protocols.
• If data related to the new indicators needs to be transferred between different systems
(e.g. HMIS summary/reporting forms to DHIS2), data mapping and transformation
rules must be developed. This ensures that data is accurately transferred and correctly
understood across systems.
• In cases where data for new indicators are derived from multiple data sources and the calculation of these indicators is complex, consider automating the calculations to minimize errors.
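To illustrate the last two points – validation rules and automated indicator calculation – the following is a minimal sketch in Python. The field names, value ranges and the example indicator (amoxicillin treatment of childhood pneumonia) are hypothetical; actual rules would follow the national indicator reference sheets.

```python
# Minimal sketch (hypothetical fields and ranges): validating a monthly summary
# record before automatically calculating an indicator from its data elements.

RULES = {
    "children_with_pneumonia": {"type": int, "min": 0},
    "children_treated_with_amoxicillin": {"type": int, "min": 0},
}

def validate(record):
    """Return a list of validation errors for one summary record."""
    errors = []
    for field, rule in RULES.items():
        value = record.get(field)
        if value is None:
            errors.append(f"{field}: required field is missing")
        elif not isinstance(value, rule["type"]):
            errors.append(f"{field}: expected a {rule['type'].__name__} value")
        elif value < rule["min"]:
            errors.append(f"{field}: value is below the accepted range")
    # Cross-field consistency check: the numerator cannot exceed the denominator.
    if not errors and (record["children_treated_with_amoxicillin"]
                       > record["children_with_pneumonia"]):
        errors.append("numerator exceeds denominator")
    return errors

def treatment_rate(record):
    """Automated calculation, run only on records that pass validation."""
    if validate(record) or record["children_with_pneumonia"] == 0:
        return None  # reject invalid records rather than report a wrong rate
    return (100 * record["children_treated_with_amoxicillin"]
            / record["children_with_pneumonia"])

report = {"children_with_pneumonia": 40, "children_treated_with_amoxicillin": 31}
print(treatment_rate(report))  # -> 77.5
```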
Operational considerations
Operational considerations focus on the practical implementation of an effort to introduce
new indicators in the HIS.
onsite support visits by QI teams operating at national or subnational levels, and troubleshooting technical issues or providing clarification on indicator definitions. These programmes can be a good opportunity to reinforce the skills learned during training and provide ongoing guidance, particularly in the early stages of adapting to the new indicators.
• Insights from the HIS landscape assessment related to data collection, entry and reporting workflows can help identify where the new indicators can be integrated smoothly and where adjustments are needed. Where necessary, workflows can be adapted to incorporate new indicators without disrupting existing data management processes.
• Consider implementing the new indicators in phases, starting with a pilot phase
in select facilities or regions. This allows you to test the changes in a controlled
environment and make necessary adjustments before rolling them out system-wide.
• Ensure that the introduction of new MNCAH QoC indicators is coordinated with other
linked vertical health programmes (e.g. HIV, TB, immunization) to avoid overburdening
health workers with multiple reporting requirements. Where possible, streamline data
collection across programmes.
• If the local HIS uses electronic health records or mobile health systems for collecting
non-routine data such as patient experiences of care, consider updating the application
to include fields for the new indicators. This may also involve modifying data entry
interfaces to ensure they remain user-friendly and efficient.
• Be mindful of the workload impact of the newly introduced indicators on frontline
health workers. If the new indicators add a significant data collection burden, simplify
where possible, or provide additional support, such as data clerks or digital tools to
streamline the process.
• Consider the reporting frequency of any new indicators (e.g. monthly, quarterly). The frequency should be feasible given the resources available and the other data reporting requirements already in place. For example, data collected through exit interviews may not be feasible to collect every month; consequently, the reporting frequency may need to be quarterly.
• Integrate new indicators into existing reporting cycles to avoid creating additional
timelines or deadlines that could confuse or overburden staff.
• Ensure that the data generated from the new indicators can be accessed at least in
near-real-time by relevant QI stakeholders and decision-makers.
• Consider creating feedback loops so that health facilities and data collectors receive
regular updates on how the data they are collecting for new indicators is being used.
This helps to foster a culture of data-driven decision-making and motivates continued
high-quality data collection.
• The initial HIS adaptation for newly developed indicators is a great opportunity to
ensure that the system remains adaptable to future changes in QI priorities or indicators.
Therefore, consider establishing processes and protocols for regularly reviewing and
updating the HIS as new QI priorities or QoC measurement requirements emerge.
Strategic considerations
Strategic considerations are critical for ensuring that the integration of new indicators
aligns with broader HIS goals, policies and long-term sustainability.
• In many LMICs, HIS adaptations – also known as HIS reviews – are carried out in three- to five-year cycles. Where the cycles are infrequent, the integration process might not be timely enough to support QI programmes. It may therefore be important to adjust QI plans and strategies at national and subnational levels to include the prioritized indicators that are feasible to measure and monitor immediately, and then include the missing indicators in the next HIS review cycle.
• Consider the long-term financial implications of adding new indicators. This includes not
only the costs of initial implementation (e.g. HMIS adaptation, training) but also ongoing
costs related to data collection, system maintenance, staff support and related upgrades.
Identifying sustainable funding sources, whether through government budgets, donor
support, or partnerships with nongovernmental actors, is essential to avoid disruptions.
• For sustainability, the new indicators should be prioritized for long-term monitoring
and fully integrated into the national HIS architecture rather than in a temporary or
parallel system. This ensures they become part of routine data collection and reporting
to inform ongoing improvement work at national and subnational levels.
• Consider setting up collaborations across different health programmes and departments to prevent siloed data systems. For example, coordination of the immunization, nutrition, early childhood development, HIV and TB programmes with the child health programme ensures that new QoC indicators are complementary and do not duplicate efforts.
• When adding new indicators, ensure the collection, reporting and use are also
in compliance with national data privacy regulations, such as laws on patient
confidentiality, data security, and informed consent. Where prioritized indicators have
ethical implications, data collection processes should be designed to protect patient
privacy and dignity, and appropriate consent should be obtained before collecting
personal health data.
• Consider making sure that the new indicators are integrated in a way that allows for
data collection and disaggregation to include different population groups (e.g. gender,
age, socioeconomic status, geographic location) to help ensure equity in health service
delivery and outcomes.
• If time and resources permit, consider incorporating an iterative process where the
performance of the new indicators is periodically reviewed, and adjustments are made
based on the lessons learned. This could involve modifying indicator definitions,
improving data collection processes, or updating training and support for HIS users.
In resource-limited settings, adding extensive sets of indicators to the local HIS for routine measurement and monitoring can be costly and burdensome, especially if there is no plan for their long-term use.
This section offers practical guidance on how to assess and leverage the existing HIS to improve the availability of QoC data for supporting health facility-based QI initiatives. This process is less complex and resource-intensive than what has been described for national and subnational level activities.
Table 9. Summary of key actions for national, subnational and facility levels

National
• Promote and support the use of the national HIS by QI managers across health system levels to assess the availability of the data they need for their QI initiatives. This will minimize efforts to resort to parallel systems.
• Provide governance and leadership to help strengthen the national HIS to accommodate indicators needed to monitor priority MNCAH QoC indicators on an ongoing basis.
• Develop and disseminate national standard operating procedures and protocols on how to collect, collate, aggregate, report, analyse and use MNCAH QoC data to support QI across health system levels.

Subnational (regional/district)
• Support health facilities to select MNCAH QoC indicators in line with health facility-specific or multi-site subnational QI initiatives.
• Support health facilities to complete relevant metadata fields for their priority MNCAH QoC indicators as part of their QI work.
• Support health facilities to analyse and make decisions about which indicators they need to measure and monitor locally and those they can monitor based on the existing HIS, and for how long.
• Provide any additional resources needed by health facility managers.

Facility (including primary care levels)
• Assess the data collection, recording and reporting forms available in the health facility to identify opportunities for measuring and monitoring indicators prioritized as part of the QI effort at health facility level.
• In cases of limited capacity, request technical support from subnational health authorities to support QoC measurement and monitoring efforts for the purpose of improvement, including developing or selecting appropriate indicators and analysing data as part of QI implementation.
In 2022, WHO developed a set of priority paediatric and young adolescent QoC indicators (PQoC) for measuring QoC standards for children and young adolescents in health facilities (see detailed metadata in Accompanying material 1). Through funding provided by USAID, WHO worked with five countries (Kenya, Laos, Malawi, Sierra Leone and Uganda) to support the uptake and integration of the 25 PQoC indicators within their national HIS. This case study describes the process that was followed in Uganda to assess the readiness of the national HIS to measure these indicators, and how the indicators were prioritized for incorporation into the national HIS based on the assessment results and national and subnational priorities.
Step 1: Assessing HIS readiness to measure prioritized QoC indicators
The assessment of HIS readiness to measure the 25 PQoC indicators began with site visits to a diverse sample of 10 health facilities in and around Kampala. These facilities represented varying levels of service delivery and range of services provided, including two referral hospitals, two general hospitals, three level IV health centres, and three level III health centres. Two teams were formed, each consisting of WHO and Ministry of Health staff, who were oriented on the 25 PQoC indicators and briefed on the site visit process. In each facility, the assessment started with a courtesy call to the management of the health facility. A dedicated health facility staff member was assigned to take the assessment team through the various service points where patients receive care (e.g. outpatient department, emergency unit, inpatient unit). The assessment team was able to collect or take photos of all the blank tools used to record data about the patient at each service point.
After the health facility assessment, the teams that visited the health facilities
reconvened for three days in a centralized place to map the 25 indicators and their
metadata against the recording and reporting tools that were reviewed and collected
during the health facility assessment. The mapping team included technical focal
points for child and adolescent health and HIS in the Ministry of Health, senior and
mid-level leadership from selected health facilities that were visited, and delegations
from USAID and WHO.
The HISLA tool (5) was used to assess whether specific indicators and their data
elements can be collected, reported and used in the national HIS. The data sources
for each indicator and/or the data elements were also noted for later use during
ensuing consultations regarding indicator adoption. The determination as to whether
an indicator could be adopted and measured in a specific timeframe was based
primarily on the availability of the indicator data elements in the HIS.
During the mapping process, the teams were able to map PQoC indicators and the
corresponding data elements to 54 HMIS registers and 10 additional complementary
routine data recording forms. Data from these tools are aggregated and captured
in five reporting forms at health facility level, and the summary data are eventually
uploaded onto DHIS2. Fig. 10 provides a summary of information flow for child health
data within the Uganda HIS based on the mapping results. In summary, 11 (44%) of the mapped indicators were recommended for adoption and use in the short term, 11 (44%) in the long term, and three (12%) could not be assigned to any timeframe as the mapping team could not agree on the utility of these indicators in the local context. Fig. 11 shows the number of indicators for which 100% and <100% of the data elements required to calculate them are available in the Uganda national HIS.
Fig. 10. Overview of the Uganda HIS landscape in relation to child health QoC data
[Figure: information flow for child health data, from 54 HMIS registers through facility reporting forms (e.g. HMIS form 033b, HMIS form 105) to DHIS2]
Fig. 11. Number of indicators for which 100% and <100% of the required data elements are available in the Uganda national HIS
[Figure: bar chart showing the number of indicators in each category]
Step 3: Multistakeholder briefing and planning meeting
Chapter 5. Tracking and analysing QoC indicators
5.3 Key terms and concepts
Improvement teams often track QoC indicators using a ‘run chart’ to analyse the effects of the changes they are making at specific points in time, assess whether care is improving over time and guide iterative cycles of change to improve care.
A run chart is not the only way to display data graphically over time. For example, sometimes health workers use bar charts to display QoC indicator results at discrete points in time. However, because a bar chart typically displays data aggregated over multiple time points – such as combining several data points to visualize a baseline time period and a subsequent time period for comparison – it can be challenging to assess whether changes implemented at a specific moment in time were associated with improved QoC indicator results, a key goal of measurement in a QI process.
Fig. 12. Bar chart comparing the percentage of women reporting being verbally abused during delivery before (average = 54%) and after (average = 27%) a change was introduced
[Figure: two-bar chart, before versus after the change]
The key message here is that the same dataset can yield very different insights depending on how the data are distributed and visualized over time. Closer examination of the data displayed in Fig. 12 using a run chart format leads to a different interpretation of the results.
Because an important purpose of measurement in improvement is to assess the effect
of iterative changes (interventions) on QoC in close to real-time, a run chart that displays
continuous data over time allows QI teams to analyse the effect of the changes as they
are introduced. Teams use this information to decide which changes should be adopted,
adapted or discontinued, and to monitor cumulative progress toward achieving specific
aim(s).
Fig. 13. Illustrative run chart scenarios (a, b, c) with trend data for the period before and after a change idea was introduced to reduce verbal abuse reported by women during delivery
[Figure: three hypothetical run charts (a, b, c) plotting the percentage of women reporting being verbally abused during delivery, monthly from Jan-20 to Sep-21, each annotated with the point where the change was introduced and the before/after averages (54% and 27%)]
The same steps and principles from the example below are applicable for generating a run
chart in Microsoft Excel without inbuilt automation.
NOTE! When deciding on the time interval for monitoring QoC indicators, it is important to consider the expected time interval to enable detection of a change in results after introduction of a change or intervention.
Fig. 14. Visual display of a run chart without indicator data
[Figure: empty run chart template for ‘Percentage of newborns with hypothermia at one hour of birth (temperature < 36.5 °C)’, with time in weeks (1–25) on the x-axis and percentage on the y-axis]
• Next, plot the indicator data values against time in the order they were collected, and
then connect the data points on the graph with a line as shown in Fig. 15.
Fig. 15. Visual display of a run chart with baseline indicator data plotted against time
[Figure: the run chart template with weekly baseline values (roughly 20–30%) plotted and connected by a line over the baseline period]
Fig. 16. Plot of indicator values against time with a baseline median line
[Figure: baseline values for weeks 1–10 (15, 18, 15, 19, 27, 25, 27, 30, 25, 29) plotted with a baseline median line at 25%]
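For teams that prefer scripting over Microsoft Excel, the same chart can be produced programmatically. The following is a minimal sketch in Python, assuming the widely used matplotlib plotting library is installed; the data values are the hypothetical baseline series shown in Fig. 16.

```python
# Minimal sketch: plotting a run chart with a baseline median line.
# Data are the hypothetical weekly baseline values shown in Fig. 16.
import statistics
import matplotlib.pyplot as plt

baseline = [15, 18, 15, 19, 27, 25, 27, 30, 25, 29]  # weeks 1-10
weeks = range(1, len(baseline) + 1)
median = statistics.median(baseline)  # 25.0

plt.plot(weeks, baseline, marker="o")                 # points connected by a line
plt.axhline(median, linestyle="--",
            label=f"Baseline median = {median:g}")    # median line across the chart
plt.ylim(0, 100)
plt.xlabel("Time in weeks")
plt.ylabel("Percentage of newborns with hypothermia")
plt.title("Newborns with hypothermia at one hour of birth (temperature < 36.5 °C)")
plt.legend()
plt.show()
```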
Box 6. Calculating a baseline median based on 6–10 initial data points
In basic terms, the median represents the middle value in a set of numbers, where half of the numbers are below the median and the other half are above it (42). You can quickly calculate the median using Microsoft Excel, or by hand. Assuming the baseline period in the dataset shown in Fig. 16 spans from Week1 to Week10 (the number of data points needed to determine the baseline median usually ranges between 6 and 10), the median can be calculated as follows:
• Begin by sorting the numbers in the dataset in ascending order. The original order is as follows: 15, 18, 15, 19, 27, 25, 27, 30, 25, 29.
• After sorting in ascending order, the numbers are arranged as follows: 15, 15, 18, 19, 25, 25, 27, 27, 29, 30.
• If the dataset contains an odd number of data points (e.g. 7), the median is simply the middle value. However, if the dataset contains an even number of values, the median is calculated as the average of the two middle numbers.
• In this case, the dataset contains 10 values, so the median is the average of the two middle numbers (i.e. the 5th and 6th numbers), which are 25 and 25. Therefore, the median is calculated as (25 + 25) / 2, which equals 25. This value is shown as the baseline median line in Fig. 16.
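The same calculation can be reproduced with a short script. The sketch below, in plain Python using only the standard library, follows the Box 6 steps exactly with the same ten baseline values.

```python
# Minimal sketch: the Box 6 baseline median calculation, step by step.
baseline = [15, 18, 15, 19, 27, 25, 27, 30, 25, 29]  # weeks 1-10, original order

values = sorted(baseline)    # -> [15, 15, 18, 19, 25, 25, 27, 27, 29, 30]
n = len(values)
if n % 2 == 1:               # odd count: the middle value is the median
    median = values[n // 2]
else:                        # even count: average the two middle values
    median = (values[n // 2 - 1] + values[n // 2]) / 2

print(median)                # -> 25.0, matching the hand calculation
```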
Fig. 17. Plot of indicator values over time with an extended baseline
median line
[Figure: the run chart from Fig. 16 with the baseline median line (25%) extended across the full 25-week period; after the baseline weeks the weekly values decline (17, 10, 9, 10.5, 5, then roughly 1–3% in the final weeks)]
In this example, the 1st event was an on-site training for all maternity nurses on immediate
and early newborn care. This training was introduced in Week7 as part of the QI initiative
to ensure nurses and auxiliary nurses were aware of and gained skills to implement
evidence-based guidelines for newborn care. The skills covered in the training included
immediate skin-to-skin contact with the mother, drying and covering the newborn,
assessing breathing to rule out asphyxia and resuscitating within the first minute after
birth if needed, checking the newborn’s temperature one hour after birth, and supporting
initiation of breastfeeding within the first hour after birth.
The maternity QI team, which included nurses and midwives, used QI tools including a
process map and the 5 Whys to investigate why hypothermia rates were high at baseline.
They discovered that immediately after cutting the umbilical cord, most newborns,
even those not showing signs of asphyxia, were being taken away from their mothers
for examination, weighing and a sponge bath. This practice delayed skin-to-skin contact
with the mother and exposed newborns to an increased risk of hypothermia. When asked,
the health workers in the maternity explained that this had been the routine for at least
five years, as they believed it was important to examine, weigh and clean the newborn
right away. After some resistance, particularly from auxiliary nurses who provided sponge
baths for newborns, the QI team decided to change this practice. During Week10 they
introduced the 1st change idea, which involved placing newborns immediately after birth
on the mother’s chest, wrapping them in a clean cloth, and covering the baby’s head with
a hat brought by the family.
During the initial test period (Week10 to Week14), the team was surprised to see that
the percentage of newborns with hypothermia dropped quickly (see Fig. 18). The team
decided to adopt this change as the new standard for newborn care. However, some
newborns continued to have temperatures below 36.5°C (hypothermia). The team looked
for other causes and realized that the maternity ward’s windows were often open, even
during the rainy season, and staff sometimes used a window air conditioner on hot
days. To prevent the room from getting too cold, the team introduced a 2nd change idea
during Week14. This involved assigning one staff member per shift to check and record
the ambient temperature in the delivery room and make any needed changes to maintain
the ambient temperature between 22–26°C, as recommended by WHO (e.g. closing the
maternity windows, turning off a window air conditioning unit). During the second test
period (Week14 to Week25), the team noted an even greater decrease in the percentage of
newborns with hypothermia. Although it required some extra effort to check the delivery
room ambient temperature each shift, the QI team decided that this was a change they
could adopt as regular practice without too much difficulty, incorporating it into the unit’s
standard operating procedure.
Fig. 18. Annotated run chart to show when specific events and changes were introduced to reduce hypothermia in newborns
[Figure: the hypothermia run chart (percentage of newborns with hypothermia at one hour of birth, temperature < 36.5 °C) annotated with the 1st event (nurses trained on newborn care, Week7), the 1st change idea (immediate skin-to-skin care, Week10) and the 2nd change idea (monitoring ambient temperature, Week14); the baseline median of 25% is extended across the QI period]
Fig. 19. Hypothetical run chart scenarios illustrating non-random variation (Scenario 1) and random variation (Scenario 2)
[Figure: indicator values plotted over 38 days around a median line of 11.5%; days 1–23 (Scenario 1) are annotated with a trend, an astronomical data point and a shift, while days 24–38 (Scenario 2) fluctuate randomly around the median]
Upon examining the pattern of the indicator results in Scenario 2 in Fig. 19, it is clear that the data points fluctuate randomly around the median line between Day24 and Day38. This suggests that any changes made during this period probably had little to no effect on the indicator result and that the pattern in Scenario 2 indicates “random variation”.
How can QI teams determine if the changes they are implementing are associated with real improvement? Scenario 1 in Fig. 19 shows a different pattern of data points around the median line from Day1 to Day23.
• Between Day16 and Day23, there are more than six consecutive data points that fall
(‘shift’) above the baseline median line. Note that depending on the indicator’s desired
direction, these data points might also appear below the median line.
• On Day13, there is an outlier data point situated above the baseline median line. Note
that this outlier could also be positioned below the median line, depending on the
direction of the indicator.
• Between Day1 and Day11, there are more than five consecutive data points showing
a consistent trend in a single direction (in this case, upward). Note that these data
points might also display a downward trend based on the indicator’s desired direction
of improvement.
These features are referred to as a shift, astronomical data point, and a trend, respectively,
when applying run chart rules. They suggest that the data pattern being observed
represents non-random variation in the dataset. In other words, there is a significant
likelihood that the observed patterns in data are associated with one or more changes
introduced during the period of measurement.
A shift or a trend in the plotted data of a run chart, in the desired direction of a QoC indicator, signals likely non-random variation in the dataset resulting from one or more specific changes introduced during the measurement period. Evidence of non-random variation in a QoC dataset associated with the introduction of specific change(s) indicates that the change(s) are the likely reason for the improvement in the QoC indicator result during the measurement period. The application of these rules to interpret data in a run chart is illustrated below. The shift rule is illustrated in a hypothetical QI initiative to improve diagnosis and treatment of children with pneumonia. The trend rule is illustrated in a hypothetical example to strengthen blood pressure measurement in pregnant women as part of a QI initiative to improve detection and management of hypertensive disorders of pregnancy (e.g. pre-eclampsia, gestational hypertension).
Rule 1: A shift
Sometimes, the dataset plotted in a run chart during a QI initiative may shift above or below the baseline median in a single direction (i.e. either all above or all below the median). A run of six or more consecutive data points above or below the baseline median is referred to as a shift (40). When a shift is observed in a run chart, it is likely that the data pattern is due to the introduction of change(s) rather than random variation.
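Where indicator data are already held electronically, the shift rule can also be checked programmatically. The following is a minimal sketch in Python; the threshold of six consecutive points follows the rule above, points falling exactly on the median are skipped (consistent with the NOTE later in this section), and the example data are hypothetical.

```python
# Minimal sketch: detecting a shift (run chart rule 1) in a series of indicator
# values against a baseline median. Values equal to the median are skipped and
# do not break the run.

def has_shift(values, median, run_length=6):
    """True if run_length or more consecutive points fall on one side of the median."""
    run, side = 0, 0
    for v in values:
        if v == median:
            continue                       # points on the median are not counted
        s = 1 if v > median else -1
        run = run + 1 if s == side else 1  # extend the run or start a new one
        side = s
        if run >= run_length:
            return True
    return False

# Hypothetical weekly values around a baseline median of 45:
print(has_shift([42, 47, 40, 55, 70, 75, 78, 76, 74, 76], 45))  # -> True
```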
Fig. 20 shows the pattern of QoC indicators over time in a QI initiative to improve the
diagnosis and treatment of children with pneumonia. One of the aims and QoC indicators
in the QI initiative was to increase the percentage of children diagnosed with pneumonia
correctly treated with the appropriate dose of amoxicillin. The figure shows a run chart
plotting the pneumonia treatment QoC indicator over time. At baseline, before the start of
the QI initiative, the median percentage of children with pneumonia receiving the correct
amoxicillin treatment was 45%, based on a retrospective review of paediatric outpatient
records. Between Week11 and Week15, the facility QI team tested two sequential changes
to address two underlying causes of poor quality of pneumonia care that they identified
using two standard problem analysis tools, a fishbone diagram and a process map:
a) Lack of time in the busy outpatient clinic for providers to measure the respiratory rate of children, making pneumonia diagnosis difficult in individual children.
b) Frequent stock-outs of amoxicillin in the facility, occurring weekly on average.
Two changes, shown in the annotated run chart in Fig. 20, were introduced by the QI team:
• 1st change idea: Trained auxiliary workers measure and document the respiratory
rate of all children before being seen by a provider, facilitating quick identification of
children with cough and rapid breathing.
• 2nd change idea: Initiation of a daily quick morning huddle with the facility outpatient
nurse in charge and the pharmacist to check and document the stock of essential
medications, including amoxicillin – with an automatic ‘pull’ order for amoxicillin when
the amoxicillin stock drops to less than 50 treatment courses (rather than waiting for
the usual monthly order to arrive).
Fig. 20. Annotated run chart illustrating a shift in the percentage of children with pneumonia correctly treated with amoxicillin
[Figure: run chart over 25 weeks; weekly values fluctuate around the baseline median of 45% until about Week13, then rise to the mid-70s with a new median of 78%; annotations mark the 1st change idea (auxiliary workers check respiratory rate before provider consultation) and the 2nd change idea (daily monitoring of amoxicillin supply)]
As can be seen in Fig. 20, there is a shift in the data around Week13 in which six consecutive data points are plotted above the baseline median line during the time period when the QI team was introducing changes to improve diagnosis and treatment of pneumonia in children.
The shift shown in Fig. 20 demonstrates that the increase in the proportion of children correctly treated for pneumonia from Week13 to Week25 was likely due to the changes introduced by the QI team at Week13 and Week15 rather than the result of random variation. Because the improved indicator result remained stable until Week25 (6–7 weeks after the shift was measured), the QI team can calculate a new median (in this case a 2nd median) to interpret indicator results as they introduce new changes after Week25 to try to improve the percentage of children with pneumonia correctly treated from a median of 78% to a new third median greater than 90%.
NOTE! Sometimes, a data point that is part of a visible shift may fall on the median, in which case it should not be counted.
Rule 2: A trend
Sometimes, the dataset plotted on a run chart may show a clear pattern of consecutive data points trending in a single direction (i.e. either going up or going down). Where there are five or more consecutive data points increasing or decreasing in a single direction, the pattern is referred to as a trend (40). When a trend is observed in a run chart, it is likely that the data pattern is due to specific change(s) rather than a result of random variation.
There are two important principles to apply when identifying a trend in a run chart:
• Ignore the median line and simply look for five or more consecutive data points either going up or going down.
• If two consecutive data points are of the same value, they should be considered as one data point.
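As with the shift rule, the trend rule can be checked in code. The following is a minimal sketch in Python applying both principles above; the example values are hypothetical.

```python
# Minimal sketch: detecting a trend (run chart rule 2). Consecutive repeated
# values are treated as a single data point, and the median line is ignored.

def has_trend(values, run_points=5):
    """True if run_points or more consecutive points move in one direction."""
    count, direction = 1, 0
    prev = values[0]
    for v in values[1:]:
        if v == prev:
            continue                     # equal values count as one data point
        d = 1 if v > prev else -1
        count = count + 1 if d == direction else 2  # extend or restart the run
        direction = d
        prev = v
        if count >= run_points:
            return True
    return False

# Hypothetical weekly values: five successive points rising from 60 to 79.
print(has_trend([62, 60, 62, 65, 70, 75, 79, 84]))  # -> True
```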
Fig. 21 shows the results of a QI initiative to improve the systematic measurement and documentation of blood pressure in pregnant women at every antenatal care (ANC) visit. High blood pressure is associated with leading causes of maternal mortality and morbidity including gestational hypertension, chronic hypertension, pre-eclampsia and eclampsia. A blood pressure (BP) check at every ANC visit is crucial for the early detection, diagnosis and management of hypertensive disorders of pregnancy.
As shown in Fig. 21, between Week10 and Week12, a training intervention and two changes
to the usual ANC process were implemented by the facility in charge and the QI team to
try to improve the measurement and documentation of BP at every ANC visit:
• Training intervention: One-day refresher training for all ANC health care providers and auxiliary health workers on hypertensive disorders of pregnancy, including the correct method of measuring BP and the importance of a quality BP check at every ANC contact.
• Change: Addition of a new column in the ANC register to document a woman’s BP value at every ANC visit (in addition to documenting the BP value in the woman’s ANC card).
• Change: Re-organization of ANC service flow to add a new step in the busy clinic: a trained auxiliary health worker checks and documents every pregnant woman’s BP and weight before the midwife sees the woman, with weekly quality checks of measured BP values in at least five pregnant women.
Fig. 21. Illustration of a trend in a run chart demonstrating improvement in the percentage of pregnant women checked for blood pressure during ANC
[Figure: run chart over 20 weeks; values fluctuate around 60–67% before the Week10–12 interventions, then rise steadily to 84–85%, annotated with the training intervention and changes to ANC patient flow and BP documentation]
As can be seen in Fig. 21, soon after the training intervention and changes to ANC processes to improve measurement and documentation of blood pressure at every ANC contact, there is a noticeable trend starting around Week12, when eight consecutive data points rise in a single direction.
To help subnational managers quickly review and interpret QoC data from multiple
facilities, ‘small multiple’ data visualizations can be used (see Fig. 24 and the country
sample in section 5.5). This involves displaying run charts for a common QoC indicator
from different facilities side by side. These visualizations can be created on paper, in
Microsoft Excel, or using data visualization software such as Tableau.
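Where a scripting tool is available, small multiples can also be generated programmatically. The following is a minimal sketch using Python and matplotlib; the facility names and monthly values are hypothetical.

```python
# Minimal sketch (hypothetical data): 'small multiple' run charts showing one
# QoC indicator for several facilities side by side, one panel per facility.
import matplotlib.pyplot as plt

facility_data = {                  # hypothetical monthly indicator values (%)
    "Facility A": [45, 48, 52, 60, 66, 70],
    "Facility B": [30, 34, 33, 40, 47, 55],
    "Facility C": [55, 54, 58, 61, 63, 68],
    "Facility D": [25, 28, 35, 38, 45, 50],
}

fig, axes = plt.subplots(2, 2, sharex=True, sharey=True, figsize=(8, 6))
for ax, (name, values) in zip(axes.flat, facility_data.items()):
    ax.plot(range(1, len(values) + 1), values, marker="o")
    ax.set_title(name)
    ax.set_ylim(0, 100)            # common scale so panels compare fairly
fig.suptitle("QoC indicator by facility")
fig.supxlabel("Time (months)")
fig.supylabel("Indicator value (%)")
plt.tight_layout()
plt.show()
```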
Managers can use both subnational-level data (e.g. district data aggregated across
facilities) and individual facility-level data to inform management decisions. For example,
they might encourage high-performing facilities to share their successful changes with
other facilities during learning exchange visits, provide extra supervision to lower-
performing facilities, or organize study tours to top-performing facilities. They can also
monitor aggregated data to track and share overall progress at subnational level with
participating facilities, other regions and with national-level stakeholders.
By segmenting data with an equity lens (e.g. race, gender, ethnicity), subnational or facility managers can also monitor how changes in QoC process and outcome indicators differ across sub-populations. Based on these results, they can analyse and address root causes of inequity in the local setting to guide interventions to reduce disparities in QoC. For subnational equity analysis to be possible, equity stratifiers need to be collected at the point of primary QoC data collection (e.g. ethnicity, marital status). The country example in section 5.5 includes an example of segmenting QoC indicator results for adolescents.
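As a simple illustration of such segmentation, the following sketch uses Python and the pandas library to calculate a client-reported indicator separately for adolescent (< 20 years) and older clients; the data are hypothetical.

```python
# Minimal sketch (hypothetical data): segmenting a client-reported indicator by
# an equity stratifier (age group), mirroring the adolescent analysis described
# in the country example in section 5.5.
import pandas as pd

responses = pd.DataFrame({
    "age": [17, 24, 19, 31, 16, 28, 18, 22],
    "felt_respected": [0, 1, 1, 1, 0, 1, 1, 1],   # 1 = top-box rating of 5/5
})
responses["age_group"] = responses["age"].apply(
    lambda a: "< 20 years" if a < 20 else "20 years and above")

# Percentage of clients who felt respected, by age group
print(responses.groupby("age_group")["felt_respected"].mean().mul(100).round(1))
```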
Alongside quantitative data, qualitative insights are essential for evaluating the feasibility
and acceptability of changes being tested by QI teams. These insights help teams
decide whether to adopt, refine and retest, or abandon specific changes. For instance,
reorganizing an emergency department triage process might result in the more timely
identification and diagnosis of children with severe acute health conditions. However, if focus groups with health workers and clients reveal widespread dissatisfaction with the reorganized triage system, it will be important to explore alternative changes to improve the triage process that are more acceptable to facility health workers and clients alike.
Qualitative data is particularly important for understanding ‘what matters most and why’ to patients, families and health workers, shedding light on opportunities to improve people-centred health care and the experience of patients, families and health workers.
Adaptive management approaches such as ‘after action reviews’ and ‘pause and reflect
sessions’ provide valuable qualitative insights that can strengthen the management and
adaptation of activities in a QI initiative. Box 7 describes the benefits of qualitative data
in QI measurement.
Table 10. Summary of key actions for national, subnational and facility
levels
calculation of client experience indicators among adolescent clients. Because there is limited literature on incorporating regular measurement of client-reported experience of care into FP programmes and QI initiatives in resource-limited settings, mixed methods implementation research assessed the feasibility and acceptability of using a FP client questionnaire in a multi-site QI initiative.
Kenya has a well-established quality model for health and has recently adopted
national health standards for improving quality of care for maternal, newborn and
child health based on WHO standards. In 2021, the Kenya Ministry of Health published
national FP standards. Standard 2 (QI) specifies that facilities should monitor quality
of FP services including client satisfaction and interventions to address gaps and
should conduct and analyse client exit interviews using a standard form. Standard 8
(service delivery and counselling) specifies that clients should receive information,
education and communication to make informed choices on FP methods. Standard 12
(respect and dignity) specifies that clients should receive care that ensures respect,
dignity without discrimination, autonomy, privacy and confidentiality.
QI initiative
In 2022, the MOMENTUM Country and Global Leadership Project in Kenya, funded
by USAID, worked with the Ministry of Health and county and sub-county managers
in Vihiga and Homabay counties to design and implement a QI initiative to support
roll-out of the national FP standards with a focus on QI (standard 2), service
delivery and counselling (standard 8) and respect and dignity (standard 12). Eight
high-volume facilities – with support from county and sub-county managers, the
Midwifery Association of Kenya (MAK) and the project – worked together to apply QI
methods to achieve common improvement aims, regularly measuring and analysing
a common set of QoC indicators and sharing learning across sites. County and sub-
county managers were supported to monitor and analyse results across facilities to
guide oversight and management of activities across sites.
Improvement aims (adapted in each facility), FP QoC indicators and data sources:

• Improve quality of immediate PPFP
– Indicator: % of women who receive PPFP method of choice prior to discharge after delivery
– Data source: Kenya Health Information System (based on summary reporting forms populated from data in the facility maternity register)

• Improve quality of person-centred FP counselling and improve quality of adolescent-centred FP counselling
– Indicators (respect, decision support, communication): % clients who felt respected; % clients who felt their preferences were taken seriously; % clients who felt they were given enough information
– Indicators (counselling content): % clients informed about possible side-effects; % clients told what to do if they experience side-effects; % clients told about bleeding changes; % clients informed about other methods; % clients told about switching to another method
– Data source: client exit questionnaires (~40/month per facility), calculated for all clients and clients < 20 years of age
Implementation (2023–2024)
• Initial training (in-person): Initial training included basic QI training for facility
QI teams, called Work Improvement Teams (WITs) in Kenya, and coach-level QI
training for county and subcounty managers and MAK representatives to build
their skills for coaching facility WITs and managing activities of the FP QI initiative.
Training materials were based on existing Kenya Quality Model for Health and
Institute for Healthcare Improvement basic QI training materials, which were
adapted to incorporate practical skills-building exercises focused on the selected
FP aims and QoC indicators.
• Follow-up coaching (in-person/virtual): The training was followed by nine rounds
of on-site coaching visits (every 1–2 months) to the eight facilities conducted by
county/sub-county managers and MAK representatives supported by the project.
The coaching supported multi-cadre facility WITs to analyse root causes of quality
problems related to improvement aims, identify and test changes to address
causes, and monitor patterns in selected QoC indicators using run charts. On-site coaching was augmented by regular virtual support to QI coaches and facility WITs by the project from January 2023 to September 2024. Emphasis was placed on building the confidence and skills of county and sub-county managers and MAK representatives to coach facility WITs.
• Regular monitoring (weekly to monthly): Facility WITs performed regular
calculation, visualization and analysis of the selected QoC indicators with support
from county and sub-county managers, MAK, and project staff to build core skills.
The project developed a user-friendly data entry tool that auto-generated run
charts that could then be annotated with key events and changes being tested. A
baseline median for the PPFP indicator was calculated in each facility based on
six months of retrospective HMIS data. In the absence of pre-intervention data,
start-up ‘baseline’ medians were calculated using the first three months of data
for client-reported indicators (six bi-weekly averages) from the newly introduced
client questionnaires. County and sub-county managers coached WITs to enter
their data to generate run charts and to analyse their results applying run chart
rules (described in this section, above). The county and sub-county coaches were
supported by the project to visualize and analyse aggregate results across the
eight facilities and to use ‘small multiples’ to visualize and analyse results for
individual facilities to guide management of the multi-facility QI initiative (e.g.
facilitating learning exchange visits to higher-performing facilities; increasing
coaching frequency for lower-performing facilities; planning sessions to update
skills or facilitate learning in specific areas).
• Regular learning exchange meetings: Five learning exchange meetings were
convened with county/sub-county managers, MAK representatives and facility
WITs to share results and local solutions, foster friendly competition, and engage
stakeholders in a shared journey of improvement. Representatives from other
facilities and sub-counties joined the 4th and 5th learning exchange meetings to
expose other sub-county and facility managers to the results and learning and to
explore interest in expansion of the QI initiative to new sub-counties and sites.
Selected results
Selected results are shown for two client-reported FP counselling indicators (i.e.
clients who felt respected; clients who reported they were given enough information
to select a FP method) and for the PPFP indicator on initiation of FP by recently
delivered women. Each result includes a brief synthesis of successful changes
implemented by facility WITs to achieve the measured improvements in care.
Fig. 22. Proportion of interviewed clients who felt they were given
enough information during FP counselling (all ages and < 20 years)
[Run chart: bi-weekly (Jan–Apr 2023) and then monthly (May 2023–Apr 2024) percentages of interviewed clients giving a "top-box" rating of 5/5 for being given enough information, plotted against the start-up baseline median (all ages).]
Fig. 23. Proportion of interviewed clients who felt they were respected
during FP counselling (all ages and < 20 years)
[Run chart: bi-weekly (Jan–Apr 2023) and then monthly (May 2023–Apr 2024) percentages of interviewed clients giving a "top-box" rating of 5/5 for feeling respected, plotted against the start-up baseline median (all ages).]
As seen in Figs. 22 and 23, there is a shift of more than six consecutive data points above the start-up baseline median in the percentage of clients reporting that they received enough information (Fig. 22) and felt respected (Fig. 23) during the QI intervention period. A shift of six or more consecutive data points above the baseline median indicates a high probability that the results are due to the changes introduced during the QI intervention period (see the explanation of the run chart shift rule in section 5.4.2; a minimal sketch of an automated shift check follows this paragraph). As seen in Fig. 23, a lower percentage of adolescent clients than of all-age clients reported feeling respected during the start-up 'baseline' period. Over time, this difference between adolescent and all-age clients diminished. Selected common changes implemented across facilities to improve the quality of FP counselling for all-age and adolescent clients are described below.
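The shift rule lends itself to simple automation. Below is a minimal sketch, under the same assumptions as the previous example, of how the rule could be checked programmatically; the six-point threshold follows the run chart guidance referenced in section 5.4.2, and the data are hypothetical.

```python
from statistics import median

def longest_run_above(values, baseline):
    """Length of the longest run of consecutive points strictly above
    the baseline median (points on the median are conventionally skipped)."""
    longest = current = 0
    for v in values:
        if v > baseline:
            current += 1
            longest = max(longest, current)
        elif v < baseline:
            current = 0
        # values equal to the baseline neither extend nor break the run
    return longest

# Hypothetical bi-weekly % values; baseline from the first six points.
values = [42, 45, 57, 51, 48, 55, 60, 63, 66, 71, 69, 74, 78, 80, 83]
baseline = median(values[:6])
if longest_run_above(values, baseline) >= 6:
    print("Shift detected: likely a non-random improvement signal.")
```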
Fig. 24. Proportion of interviewed clients who felt respected during FP counselling, by facility (Hospitals A–H)
[Small multiple run charts: one panel per hospital, 0–100% scale, Jan 2023 (Half 2) to Apr 2024.]
Fig. 24 shows the individual facility results for the percentage of clients who felt respected during FP counselling during the QI period. As seen in Fig. 24, the pattern of results over time for the percentage of clients who felt respected varies by individual facility, with six of the eight facilities demonstrating a shift of six consecutive data points above the start-up baseline median during the QI intervention period. Fig. 24 demonstrates the importance of analysing patterns in facility-specific data (in addition to aggregate data), since aggregate results may obscure important differences across individual sites. Analysis of patterns in aggregate and facility-specific data by managers is important to guide effective management of a multi-facility QI initiative.
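A "small multiples" display such as Fig. 24 can be generated directly from facility-level series. Below is a minimal sketch in Python, assuming eight monthly series held in a pandas DataFrame with one column per facility; the facility names and values are synthetic placeholders, not the study data.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical monthly % values for eight facilities (Hospitals A-H).
data = pd.DataFrame(
    {h: [40 + i * 3 + offset for i in range(12)]
     for offset, h in enumerate("ABCDEFGH")}
)

fig, axes = plt.subplots(4, 2, sharex=True, sharey=True, figsize=(8, 10))
for ax, hospital in zip(axes.flat, data.columns):
    series = data[hospital]
    baseline = series.iloc[:6].median()   # start-up baseline median per site
    ax.plot(series.index, series, marker="o", markersize=3)
    ax.axhline(baseline, linestyle="--")
    ax.set_title(f"Hospital {hospital}")
    ax.set_ylim(0, 100)
fig.suptitle("% clients who felt respected, by facility")
plt.tight_layout()
plt.show()
```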
Fig. 25 shows the aggregate results across the eight facilities during the baseline and QI periods for the proportion of recently delivered women who initiated a PPFP method prior to discharge. The baseline data are based on a retrospective review of the maternity records. A total of 17 841 deliveries occurred across the eight facilities during the baseline and QI periods. Selected common changes implemented by the facilities are noted below.
Fig. 25. Proportion of recently delivered women who initiated a PPFP
method prior to discharge in the eight facilities
[Run chart: monthly percentage of recently delivered women who initiated a PPFP method prior to discharge, aggregated across the eight facilities, 2023–2024, plotted against the baseline median.]
• Designation of a 'PPFP nurse' to provide counselling and a FP method of choice prior to discharge for all recently delivered women in the maternity ward.
• Strengthening of PPFP counselling during antenatal care (ANC) visits, with systematic documentation of the selected FP method in the ANC section of the mother–child booklet.
• Confirmation of the FP method selected during ANC (as documented in the mother–child booklet) during intrapartum and postpartum care, including support for selection of a different method if desired.
• Inclusion of the spouse in PPFP counselling when desired by the woman.
Chapter 6

6. Assessing and improving data quality to strengthen quality improvement results and stakeholder trust
This chapter does not reproduce the guidance already published by WHO. Rather, it
provides guidance on localized data quality improvement (DQI) approaches, applying
the principles and methods used in QI initiatives to understand causes for poor quality
of data and iteratively test changes to improve data quality based on identified causes.
6.2.1 Vicious cycle of poor-quality data
Successful QI initiatives generate and use high-quality data to guide iterative improvement.
As described in chapter 4, data required for a QI programme can be obtained directly from various data sources in the local HIS, or QI teams can generate new data based on their
information needs for a specific QI programme. However, in many low-income settings,
data collected through the existing HIS may be of poor quality. The lack of quality HIS
data has created pressure for the measurement community to use alternative methods of
generating health-related estimates including mathematical modelling. Such alternative
methods continue to divert already scarce resources that are needed to strengthen HIS
(50–53). Furthermore, when the quality of data is poor, some donors may collect their
own data by establishing parallel data systems, which further fragments country-level
data pipelines. This creates a vicious cycle of poor data quality as illustrated in Fig. 26.
Fig. 26. A vicious cycle of poor data quality in HIS
[Cycle diagram: weak health information system capacity (e.g. limited capacity to generate high-quality data) → data not trusted → alternative methods of generating health metrics sometimes used → health information system capacity remains weak.]
Source: (46).
In many LMICs, health facility data can be inaccurately recorded or not recorded at all
(54–56). Some of the factors that lead to poor data quality in health facility data collection
and recording tools include: illegible handwriting during data capture or reporting;
unsuitable or unstandardized data formats; lack of a standardized data dictionary in
health facilities; lack of time or motivation to record high-quality data; calculation errors;
insufficient data quality checks at data entry level; non-adherence to data collection
or recording guidelines and data definitions; and a poor culture of data use which disincentivises health facility-based teams to collect and report high-quality data (57).
In QI programmes, these challenges can emerge when new QoC indicators are integrated into the HMIS for routine measurement and monitoring. If health care workers perceive these new indicators as an added burden, especially when their value is not yet evident, data recording and indicator calculation may initially suffer in quality. Likewise, established indicators can be affected by ingrained practices, such as habitual inaccuracies in data entry, which can also undermine data quality. Efforts to improve the quality of data must therefore address the underlying factors that contribute to poor quality of data.
6.3 Practical guidance

6.3.1 Localizing the assessment and improvement of data quality using QI methods
Implementing QI methods to enhance the quality of data collected and reported by health
facilities for improving MNCH care can significantly complement existing government
mechanisms, such as DQA and periodic data quality reviews. The importance of this
hybrid approach is highlighted below:
• DQA mechanisms establish guidelines and standards for data quality at a national or
subnational level, ensuring consistency and compliance across the health care system.
QI methods can provide a systematic, step-by-step process for health care facilities
to implement and operationalize these national standards. By embedding QI cycles such as Plan–Do–Study–Act, organizations can continuously improve how they meet
national data quality standards in a practical, timely and localized manner.
• Periodic data quality reviews highlight gaps, inconsistencies and areas where data
quality falls short of national standards. These reviews provide important insights
but are often more retrospective and evaluative. QI methods are action-oriented
and can help QI teams respond to the gaps identified by these reviews. For example,
after a review identifies issues with timeliness or completeness of data for newly
introduced QoC indicators, QI approaches can help health facilities and administrators
at subnational level systematically design, test and implement interventions to address
these specific gaps.
• Data quality reviews are also often conducted at specific intervals (e.g. annually, or
semi-annually), focusing on snapshots of data quality over time. QI methods can
complement such processes by fostering continuous improvement in data quality
through integrating ongoing monitoring and real-time feedback loops. This ensures
that improvements are sustained between periodic reviews and that health facilities are
continuously improving their data collection, management and reporting processes,
not just responding to the review outcomes.
• If not implemented properly, national and subnational DQA mechanisms may also
appear to promote a top-down approach to ensuring data quality, due to their
focus on compliance and adherence to protocols. When QI methods are embedded
in such mechanisms, this can promote a culture of data quality within health care
organizations. By involving staff at all levels in continuous improvement efforts, QI
creates a bottom-up culture where data quality is seen as an ongoing priority, not just
something to be reviewed periodically.
• National and subnational DQA mechanisms also typically involve auditing and
validating data reported through national or subnational HIS, focusing on various
data quality attributes for the data that feeds into aggregated reports. Using QI
methods to improve data quality can help ensure that the data feeding into national
and subnational systems is accurate at the point of care.
6.3.2 Key actions at health facility and subnational level

Illustrative actors: health information officers, in consultation with QI team members and subnational HIS and M&E managers as appropriate.

Health facilities are the primary source of health care data, where patient information is recorded and services are delivered. Ensuring data accuracy at this level is foundational for generating reliable reports at subnational, national and global levels. Quality individual patient data at the facility level also directly influences the quality of patient care, because accurate individual patient data is essential for clinical care decision-making (e.g. making an accurate diagnosis; selecting and accurately dosing an appropriate medication; accurately monitoring a patient's clinical status; providing appropriate follow-up care). By improving data quality at the source, the entire data reporting and utilization chain benefits from cleaner, more reliable data. This reduces the need for extensive data cleaning or validation at subnational or national levels, streamlining reporting processes and improving overall system efficiency. However, subnational actors also have a role to play in ensuring the quality of data submitted from health facilities. The key actions at both levels are reviewed in this section of the chapter.
• Start by collecting information that clearly defines and demonstrates the existence and
scale of data quality problems. Both qualitative and quantitative data are valuable at
this stage.
– For instance, in a multi-site QI initiative overseen at the subnational level, the
manager might observe unusual patterns in the completeness and timeliness of
data submitted by health facilities for newly introduced QoC indicators prioritized
for long-term monitoring of integrated management of childhood illnesses (IMCI).
To validate these observations, the manager could commission a data quality
audit of historical data. This audit would help quantify the percentage of missing
fields, incorrect entries and delays in data submission (a minimal computational sketch follows this bullet). Establishing this baseline
provides measurable evidence of the problem and offers a concrete starting point
for monitoring progress toward improvement.
– Once the audit has been completed, the manager may also convene the subnational and/or the concerned health facility QI teams during the regular QI meeting to discuss where the problem is occurring, who is involved in or affected by the problem, when it occurs, and the consequences of the problem for the QI initiative. From here, a clear and concise problem statement that captures the issue in specific and measurable terms can be prepared. In this case, a problem statement could be written as follows: Health facilities X and Y have been reporting incomplete and inaccurate data for the IMCI QoC indicators, with over 30% of fields left blank or filled incorrectly in routine reports over the past three months. This has delayed timely decision-making and resource allocation, undermining efforts to improve IMCI-related processes and outcomes.
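As an illustration of the kind of audit described above, the sketch below computes simple completeness and timeliness metrics from facility reports. It is a minimal example in Python; the record structure, field names and dates are hypothetical, not a prescribed reporting format.

```python
from datetime import date

# Hypothetical monthly facility reports: each record lists required
# indicator fields (None = left blank) and the submission date.
reports = [
    {"facility": "X", "fields": {"cases_seen": 120, "cases_classified": None},
     "due": date(2024, 3, 5), "submitted": date(2024, 3, 12)},
    {"facility": "Y", "fields": {"cases_seen": 98, "cases_classified": 91},
     "due": date(2024, 3, 5), "submitted": date(2024, 3, 4)},
]

total_fields = sum(len(r["fields"]) for r in reports)
missing = sum(v is None for r in reports for v in r["fields"].values())
late = sum(r["submitted"] > r["due"] for r in reports)

print(f"Missing fields: {missing / total_fields:.0%}")  # completeness gap
print(f"Late reports:   {late / len(reports):.0%}")     # timeliness gap
```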
• When multiple data quality problems are identified, it may be necessary to prioritize problems based on their urgency, the feasibility of improvement, and their impact on the QI initiative. Prioritization ensures that the team focuses on the most critical issues first.
• Consider creating a process map or flowchart that illustrates how data are collected,
entered, processed and reported. Understanding the flow of data helps identify
potential points where errors could occur. Mapping patient flow and data flow at
service points is also useful in visualizing where problems arise. Chapter 5 explains
and gives an example of how data flow mapping can be conducted in a health facility for any technical area.
• Consider applying structured problem-solving tools to uncover the root causes of data quality issues. A host of such tools used in mainstream QI can be adapted to a DQI initiative. This chapter does not explain each of these methods in detail but provides some illustrative examples of their application to data quality.
– Fishbone diagram (also known as an Ishikawa diagram): This is a visual tool used to
categorize potential causes of a problem into different types (e.g. people, process,
technology, environment and data). Each category helps the team brainstorm
possible sources of a given issue (58). Using the IMCI example, assuming the
identified problem is inaccurate data, the ‘People’ category could include issues
like insufficient training of staff, while the ‘Process’ category might include lack of
standardized data entry procedures.
– 5 Whys: This is a simple but effective method of asking ‘why?’ repeatedly (usually
five times) to drill down into the root cause of the problem. This technique helps
reveal deeper, systemic causes that might not be immediately obvious (59). Using
the IMCI example, assuming the identified problem is data entry errors, the 5 Whys
could be developed as follows:
i. Why are there frequent data entry errors for the newly introduced IMCI QoC
indicators?
Because the staff entering the data often make mistakes while inputting
patient information.
ii. Why do staff make mistakes while inputting patient information?
Because they are rushed due to the high workload and insufficient time to
enter data carefully.
iii. Why is the workload so high for the staff responsible for data entry?
Because there aren’t enough staff members assigned to handle both patient
care and data entry tasks.
iv. Why aren't there enough staff members to manage both patient care and data entry?
Because the facility has not prioritized hiring additional staff or reassigning roles to specifically address data management needs.
v. Why hasn't the facility prioritized hiring or reallocating staff for data
management?
Because there is a lack of awareness of the importance of accurate data and
no budget allocated for expanding the team dedicated to data quality.
– Pareto analysis: This is a technique used to identify the most significant causes of
a problem, based on the principle that a small number of causes often contribute
most to the problem (the 80/20 rule). By prioritizing the most common causes,
the team can focus their efforts effectively (60). For example, a Pareto analysis of
data quality issues related to the IMCI QoC indicators might reveal that 80% of
data inaccuracies are caused by just 20% of health workers who were not trained
in the new data collection system (a minimal computational sketch of this analysis follows).
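The following is a minimal sketch of a Pareto analysis in Python. The error categories and counts are hypothetical; the point is the sort-and-cumulate logic that identifies the "vital few" causes.

```python
# Hypothetical counts of data errors by cause, e.g. from a records audit.
error_counts = {
    "untrained staff": 84,
    "illegible handwriting": 12,
    "missing register pages": 6,
    "calculation errors": 3,
}

total = sum(error_counts.values())
cumulative = 0
# Sort causes from most to least frequent, then accumulate their share.
for cause, count in sorted(error_counts.items(),
                           key=lambda kv: kv[1], reverse=True):
    cumulative += count
    share = cumulative / total
    marker = " <= vital few" if share <= 0.8 else ""  # 80/20 rule threshold
    print(f"{cause:<24} {count:>4}  cumulative {share:.0%}{marker}")
```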
• Consider engaging the right stakeholders on the QI teams to gather credible insights
that may not be evident during data review. Such insights can be gathered during the
same QI meetings mentioned in the previous action on problem identification and
definition.
• After identifying the root causes, prioritize them based on their impact on the data
quality issue and the feasibility of addressing them. This ensures that the team focuses
its efforts on the most significant causes that are within their control to change. For
example, if training is identified as a major root cause, and it is feasible to implement
a structured training programme, this issue may be prioritized over technical issues
that require extensive resource investments.
• Depending on resource availability, it may be helpful to confirm the identified root
causes by reviewing data, engaging stakeholders and testing assumptions. For example,
the team may choose to validate that lack of training is a root cause of inaccurate data
recording by reviewing staff training records or observing data entry practices in real
time.
• Once the root causes have been identified and prioritized, the QI teams can engage in
brainstorming sessions to generate ideas for change. All team members, regardless of
their role, should be encouraged to contribute suggestions for solving the identified
problems. When trying to find a potential solution, it is important to:
– Ensure that change ideas directly address the root causes uncovered during the analysis phase. For example, the solutions should target whichever systemic, human, technical or process factors were identified as the root cause(s) of the problem.
– Leverage all members of a multidisciplinary QI team. Clinical staff might suggest
workflow changes, data clerks might offer insights into improving data entry tools,
and IT staff might propose technological upgrades. Finding a balance between several, sometimes conflicting, perspectives may be difficult but is nevertheless important.
• After generating a list of potential change ideas, the QI team should evaluate and
prioritize them based on several criteria:
– Can the change idea be realistically implemented with the available resources,
staff, and infrastructure?
– Will the change idea significantly improve the identified problem? It is important
to prioritize interventions that address the most critical root causes and have the
potential for high impact. For example, if the QI team is considering both training
workshops for existing staff and hiring additional staff, they might prioritize
training workshops if they are more immediately feasible and expected to have a
more direct impact on data quality.
– Can the intervention be maintained over time? Consider whether the change can
be easily integrated into routine practices or workflows.
– Is there existing evidence or previous experience suggesting that the intervention
will likely lead to improvement?
• For each selected intervention, the QI team should create an action plan, including:
– The specific improvement to be achieved by implementing the selected
intervention(s) (e.g. to reduce data entry errors by 20% within three months).
– The steps required to implement the intervention, including who is responsible
for each step and what resources will be needed. An implementation plan may,
for example, include developing a training curriculum on data entry standards for
the indicators of interest, scheduling and conducting the training session for all
data clerks and health workers, and providing data entry job aids such as checklists
or guides at each service point.
– A timeline for implementing the intervention, including key milestones and
deadlines.
– Indicators to assess whether the intervention is leading to improvement. These
indicators should be specific, measurable, achievable, relevant and time-bound
(SMART).
• Consider carrying out small-scale testing (i.e. a pilot) of the intervention depending on
the geographical scope. If the intervention is to be implemented across several health
facilities, it would be important to test it on a small scale before full implementation
to see how it works in practice. Testing allows the QI team to refine the idea, address
any unforeseen challenges, and gather data on its effectiveness before scaling up. It
is also important to start small by testing the intervention in a single service point or
group of staff, which would allow teams to manage issues that arise during the testing
phase more effectively. For example, if the change idea involves introducing a new
data entry process, the QI team can pilot the process in one unit of the health facility
to gather feedback and assess impact before rolling it out more widely.
• It is important to define time-bound indicators to monitor the effect of the priority interventions on the data quality problem. The performance indicators might be quantitative (e.g. % of records with data errors, timeliness of data reporting) or qualitative (e.g. staff satisfaction, ease of use). During the testing phase, the QI team should closely monitor the implementation and collect data to track progress. This involves gathering feedback from staff involved in the testing and monitoring the selected indicators regularly. For example, after training staff in a new data entry protocol, the QI team in participating health facilities might review a sample of data records every week to see if errors are decreasing (a minimal sketch of such weekly tracking follows this bullet). Additionally, they could conduct brief enquiries (e.g. interviews) with data clerks or health workers to gather perspectives on whether the new process is easier or more difficult to use.
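A minimal sketch of the weekly review described above: compute the weekly error rate from a sample of audited records and compare it with the improvement objective. The sample sizes, error counts and target are hypothetical.

```python
# Hypothetical audit results: per week, (records sampled, records with errors).
weekly_samples = [(40, 9), (40, 8), (40, 6), (40, 5), (40, 4)]
target_error_rate = 0.10  # illustrative objective, e.g. from the action plan

for week, (sampled, with_errors) in enumerate(weekly_samples, start=1):
    rate = with_errors / sampled
    status = "on track" if rate <= target_error_rate else "above target"
    print(f"Week {week}: error rate {rate:.0%} ({status})")
```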
• During the planning phase, it is important to anticipate potential challenges or
resistance that may arise during the testing phase. For example, the staff may initially
resist changes due to unfamiliarity or perceived increased workload, or technical issues
with data entry tools may arise. It is therefore important to create contingency plans
to address these challenges proactively, for example by developing a plan for ongoing
support, such as regular onsite support conducted by subnational QI teams.
• Before fully implementing the intervention at scale, ensure that all necessary resources,
staff and materials are in place. Preparation may include: communicating the upcoming
change to all relevant stakeholders at national and subnational levels to ensure that
they are aware of the purpose, process and their roles in the implementation; providing
training or orientation to individuals who will be directly involved in implementing the
intervention; and ensuring that any tools, templates or technological systems needed
for the intervention are ready and accessible.
• Closely monitor the process as the intervention is being implemented in new sites, as this is important for identifying any unexpected issues, bottlenecks or
challenges that may arise. Monitoring also involves gathering feedback from those
involved in the implementation. Data can be collected for the same indicators used
during the pilot phase to evaluate the effectiveness of the change across the settings
of interest.
• Throughout the implementation phase, it is important to engage with staff and other
stakeholders to gather their input and feedback. This can be done at the same time
the regular QI meetings are taking place, where both data quality issues and the wider
QI related issues can be discussed simultaneously. This helps identify areas where
the changes are working well and where adjustments may be needed. Staff buy-in is
critical to the success of the change, so fostering open communication is essential.
• If any issues arise during the initial implementation of an intervention, the QI team
should be prepared to troubleshoot problems and make minor adjustments to the
process as needed. However, major changes to the plan should be avoided unless
necessary, as they could undermine the validity of the change ideas. For instance, if
data clerks and health care workers are struggling to adapt to a newly adapted health
facility register or case note, the QI team might provide additional training or technical
support. If a specific workflow is causing delays, the team could adjust the sequence
of steps to improve efficiency.
• Consider maintaining detailed accounts of what was implemented, how it was done
and any deviations from the original plan. Documentation is crucial for analysing the
results during improvement and for replicating the change if it proves successful.
e. Evaluate progress and opportunities for further implementation
It is important for QI teams to evaluate the effectiveness of the implemented interventions
to determine whether they led to the desired improvements. This phase involves a careful
and systematic analysis of the data collected during the large-scale implementation
phase, comparing it to baseline data and the objectives set during the planning phase.
• For quantitative data, consider applying the same methods described in Chapter 4 for analysing and visualizing data collected routinely in health facilities. The principles
remain the same although other data analysis techniques can be used to assess the
significance of change. The QI team might be interested to see if the performance
steadily improved over time or if it fluctuated, or whether there are specific days or
times when errors spiked or dropped. This kind of information can feed into the next
phase of improvement planning. The team may also segment the data based on the roles of the data capture team, service point and other relevant characteristics.
• For qualitative data it might be useful to review feedback from staff and stakeholders
to assess whether the change is perceived as effective and sustainable. For example,
the baseline data showed a 20% error rate in data entry before the intervention. After
implementing a new training programme, the error rate dropped to 10%. This meets
the QI team’s objective of reducing errors by at least 50%, but may not be sustainable if
the same resources needed to bring about this change are not maintained. It may also
be important to consider if there have been unintended consequences of the change,
such as increased workload or reduced efficiency in the routine care of patients.
• Once the data has been analysed, consider holding a meeting with the relevant QI
stakeholders to discuss the results and their implications. It is important to encourage
an open discussion on what worked, what didn’t, and what could be improved. This
meeting may also be a conducive platform to make decisions as to whether the
intervention should be adopted on the basis that it led to significant improvements with minimal unintended effects across different settings, or be adapted if it had mixed results. If the intervention did not lead to improvement or created unintended negative
results across settings, it may be necessary to abandon it and explore alternative
solutions.
• If the QI team decides to adopt the intervention to improve data quality for long-term
implementation, the next step is to standardize the process to ensure that the new
practice becomes the norm and is consistently followed across all concerned health
facilities. This may include detailed documentation of protocols, workflows, roles and
responsibilities to serve as a reference for all staff to ensure consistent implementation
across sites. In some instances, it may only be necessary to revise existing guidelines,
protocols, forms or tools to reflect the standardized change.
• As the goal is to ensure that the interventions implemented can be maintained over time and continue to produce positive results without requiring ongoing external support or intervention, it is important to ensure that the standardized changes are embedded into job descriptions, performance evaluations and local policies.
Table 12. Summary of key data actions for national, subnational and facility levels
The path to achieving this monumental goal was not easy. For PFA! to succeed,
reliable and high-quality data was essential. Data-informed decisions helped track
progress, highlighted gaps in health care delivery, and guided new actions to improve
service delivery. But like many developing health care systems, the country struggled
with data quality – completeness, accuracy, timeliness, and reliability of the data
reported in the district health management information system (DHIMS).
Realizing this, PFA! launched a bold initiative to improve data quality – one that was
built on thorough planning and a deep understanding of the local context. This DQI
programme aimed to strengthen the data collection process, root out inaccuracies, and ensure that every bit of information that made it into the system could be trusted.
At the heart of this effort were the district health information officers and hospital
data officers. The PFA! team began by empowering them, providing intensive training
on quality planning tools to help diagnose the root causes of poor data. For many,
this was their first opportunity to develop skills in understanding and resolving data
quality issues systematically.
One of the major challenges lay in discrepancies between the raw data collected in
health facilities and the final figures entered into DHIMS. This mismatch meant that
critical information could be lost or distorted, affecting everything from patient care
decisions to the allocation of resources. In response, PFA! implemented a robust
auditing process where the data officers meticulously compared source data with
what was ultimately reported, identifying and addressing discrepancies to restore
trust in the system.
As data began to improve, PFA! introduced a system to track the progress of key data
metrics: completeness, accuracy and timeliness. Through annotated run charts, they
could visually represent the progress made, showing health care workers at all levels
how their efforts were closing the gaps and improving the quality of both the data
and the care.
PFA!’s work didn’t happen in isolation. In some geographic areas, it overlapped with
other important health care initiatives. USAID funded a project focused on improving
malaria care, while PATH was working on improving maternal and newborn outcomes.
Rather than operate in silos, the Policy, Planning, Monitoring and Evaluation Division
of the Ghana Health Service saw an opportunity to bring these projects together,
harmonizing their efforts to improve data quality at the national level. The result
was a unified approach to data QI that became the gold standard across multiple
health care domains.
At the district level, officers took proactive measures to improve data quality. They
sent letters to health facilities clarifying data flow processes, highlighted key data
issues, and reinforced reporting timelines. Facility managers were engaged in action
plans, and monthly performance updates were shared with the District Director of
Health Services, creating a culture of accountability and continuous improvement.
Ranking facilities according to their data quality performance added a competitive
edge, motivating teams to strive for better outcomes.
Data audit teams were formed, and regular monthly meetings became the norm. Validation of summary reports became a monthly exercise, ensuring that data at every level of the system was scrutinized and improved upon.

But perhaps the most innovative change PFA! introduced was the concept of multiple
feedback loops. In the old system, data flowed only in one direction – upwards from
the health facilities to district, regional and national levels – with little to no feedback
provided to the lower levels of the health system. This unidirectional flow created a
disconnect, where health care workers reported data without knowing how it was
being used or whether it was even reviewed. PFA! changed that dynamic, ensuring
that data quality feedback was provided at every level, and that lower levels had
visibility into how their data was being utilized to inform decisions.
Management teams got involved, reviewing dashboards that showcased data quality
performance. A space was created in management meetings for data officers to
present their findings, giving them a voice and recognizing their role in driving QI
initiatives.
Learning sessions became a platform for sharing best practices across facilities.
Teams gathered for periodic review meetings, where they presented their successes
and challenges in improving data quality. In a marketplace format, facilities taught
one another, cross-pollinating ideas that spread innovation across the country.
Coaching and mentoring were key to sustaining these efforts. Improvement coaches,
trained in QI and data quality, visited districts to assess the impact of changes and
to help plan future improvement cycles. Their hands-on guidance helped facilities
stay on track and continually push for better results.
By the end of the project, PFA! had fundamentally influenced, and in some respects
helped to change, how data was collected, reported and used across the country’s
health care system. Data quality improved dramatically, which in turn strengthened
the health care system’s ability to track progress, allocate resources effectively, and
ultimately, reduce preventable deaths among children under five years of age. The
ripple effect of the DQI programme reached far beyond PFA!’s immediate goals and
geographic areas, setting a standard for data-driven health care improvement across
the nation.
Chapter 7

7. Strengthening quality improvement measurement capability of key actors
In line with WHO implementation guidance for improving the quality of facility-based MNCH services (2), which emphasizes the subnational health system (e.g. region, district) as a primary unit for implementing and monitoring large-scale QI initiatives, this chapter
focuses in particular on the skills needed by subnational managers and front-line QI
teams. Many of the principles, however, are applicable to building the QI skills of managers
and health workers in any large health care organization (e.g. large public or private sector
hospital).
7.3 Key terms and concepts
7.3.1 Importance of building QI measurement capabilities and selected challenges

The successful design and implementation of large-scale QI initiatives to improve care and health outcomes for women, newborns and children requires, among other things, that managers and health care workers possess the knowledge, skills, experience and tools necessary to improve and measure the quality of the care they provide and manage.
Multiple actors in a health system or organization (public or private) need QI measurement
knowledge and skills to support the design, management and improvement of MNCAH
services. These actors include members of QI teams, facility managers, subnational
MNCAH and QoC programme managers and health information officers as well as
national-level policy-makers, programme managers and health information officers.
Individual actors need different skills based on their specific role supporting the planning,
implementation and monitoring of quality improvement initiatives at various system
levels. Since not everyone in a health system or organization needs the same depth of
knowledge and skills, capability-strengthening strategies should be tailored to the needs
of individual actors.
Many frontline health workers may have little to no familiarity with QoC indicators
and even less familiarity with how to measure QoC indicators over time to help guide
improvement. There may be limited opportunities for managers and health workers to
develop QI skills to implement key steps in a QI initiative, including selecting improvement
aims and QoC indicators, analysing root causes for poor QoC, iteratively testing changes
to close gaps, and regularly monitoring patterns in QoC indicators over time to track
progress and guide improvement (see Fig. 1). In many settings, health worker pre-service
education and in-service training do not include QI skills and health workers have limited
opportunities to learn skills on the job. Often, the leadership, financing, skilled expert
trainers and skills-building materials necessary for training and mentoring are lacking.
7.4 Practical guidance
7.4.1 Develop a capability-strengthening strategy

Illustrative actors: national and subnational managers.

National and subnational managers can adapt Table 12 to define the core QI measurement knowledge and skills needed by specific health worker cadres and managers
in their context. Because measurement is an integral
component of a broader process of improving care, measurement-specific skills can and
should be incorporated into QI capability-strengthening strategies and curricula. Once
required QI skills (inclusive of measurement) have been defined for specific actors and
health worker cadres, national and subnational managers should develop a strategy and
costed operational plan at appropriate system levels (e.g. district) to build these skills
applying adult learning best practices. Important platforms for building core QI skills
include pre-service education, in-service training, mentoring, supportive supervision
and professional development platforms among others.
In some cases, a QI team may designate a data officer team member to lead data
collection, calculation and plotting of QoC indicators in a run chart. In other cases, QI
teams may work together to support these steps or may rotate responsibility for this function among team members.
It is important that training and mentoring for managers and supervisors (including
supervisors of supervisors such as regional ministry of health managers) build the skills
needed by managers to provide effective leadership of the planning, financing and
oversight of QI initiatives including QI capability-strengthening activities. In addition to
the skills needed by QI teams, managers and leaders need additional skills for supporting
the design and implementation of subnational QI initiatives, including selection of
improvement aims and meaningful QoC indicators, supporting QI teams and monitoring
individual site and multi-site aggregated QoC indicators to guide effective management
of subnational training and QI initiatives (e.g. using small multiple run charts – see
Chapter 5).
For example, experts from local institutions, organizations, professional associations
and other entities can support training and follow-up mentoring in many settings. Follow-up mentoring can be provided via blended virtual and in-person approaches. In some settings, subnational QI initiatives organize periodic learning meetings across sites to share and spread learning; such learning meetings can be an efficient mechanism to reinforce health worker QI skills.
7.4.3 Strengthen professional development platforms
In many countries, health workers must participate in continuing professional
development (CPD) activities to maintain their professional licensure. Requiring a
certain proportion of CPD activities or points to be focused on QI skills is one approach to strengthening and reinforcing skills, provided that CPD activities are of high quality. CPD
activities may encompass a range of modalities including virtual self-directed learning
materials, training workshops in professional conferences, virtual synchronous and
asynchronous training courses leading to certification, mentored practicums, and others.
It is important that CPD activities and materials be well vetted by experts to ensure that materials are of high quality, apply adult learning best practices and focus on important QI
competencies. In some settings, CPD activities and materials are regulated by professional
associations.
Table 13. Summary of key actions for national, subnational and facility
levels
Given the critical role of reliable data and measurement in quality improvement
efforts, building improvement and measurement knowledge and skills of actors
at multiple system levels was integral to PFA!’s approaches. Training approaches
employed a variety of strategies to build skills for two primary roles: improvement advisors and improvement coaches. Between 10 and 15 improvement advisors, responsible for technical oversight of all PFA! activities, received intensive training in the science of
improvement, including systems thinking, data and variation, theory of knowledge
and the psychology of human behaviour. This sequential training was provided
over a nine-month period and included three weeks of classroom-based learning
interspersed with practical on-the-job training in the field, periodic in-person and
virtual sessions to apply key concepts and receive peer feedback and one-on-one
coaching from experts. These highly trained improvement advisors were leveraged
by the project team to drive large system change including visioning, design and
adaptation, and teaching and training.
As PFA! went to scale, 400 regional, district and facility managers and QI team managers employed in the Ghana Health Service were trained as improvement coaches
to provide more intensive support to local activities. PFA! designed a 6–8 week
progressive training course to train these staff, including three days of in-person
instructional time followed by regular facility-based mentoring and learning. With
minimal theoretical concepts, the training focused on practical skills, removing the
requirement for laptops, and emphasizing the plotting and analysis of QoC indicator
results using paper-based run charts. The training focused on the practical application
of five basic QI tools including fishbone analysis, process maps, 5 Whys, Plan–Do–
Study–Act cycles, and Pareto charts. Improvement coaches were mentored by experts
and learned on the job, visiting facility QI teams on a monthly basis to support their
use of local data to assess the effect of changes they were making based on QoC
indicator results.
Brought into the centre of QI teams, information officers became critical decision
support analysts to other QI team members as they often led the process of plotting
7
and interpreting annotated run charts to identify successful, feasible and scalable Strengthening
changes based on testing. Additionally, PFA! created a learning network of health quality
improvement
information officers that helped build the skills, confidence, and status of health measurement
information officers at facility and district levels. Learning from PFA! highlights the capability
importance of targeting different knowledge and skills tailored to the roles of specific
actors and the importance of approaches that strengthen and regularly reinforce
skills over time emphasizing on-the-job skills-building and regular peer to peer
learning. Support of senior managers was critical to spread and sustainability. At the
end of the project, an independent evaluation concluded that “the initial investment
in capacity building had proved to be cost effective”.
References

1. Improving the quality of care for maternal, newborn and child health: implementation guide for national, district and facility levels. Geneva: World Health Organization; 2022 (https://iris.who.int/handle/10665/353738).
2. Global meeting of the Network for Improving Quality of Care for Maternal, Newborn
and Child Health. 14–16 March 2023, Accra, Ghana. Meeting report. Geneva: Network
for Improving Quality of Care for Maternal, Newborn and Child Health; 2023 (https://
qualityofcarenetwork.org/knowledge-library/report-third-global-meeting-network-
improving-quality-care-maternal-newborn-and, accessed 27 November 2024).
3. Mother and Newborn Information for Tracking Outcomes and Results (MoNITOR)
[website]. Geneva: World Health Organization; 2024 (https://www.who.int/groups/mother-
and-newborn-information-for-tracking-outcomes-and-results-(monitor), accessed 27
November 2024).
4. Child Health Accountability Tracking (CHAT) [website]. Geneva: World Health Organization;
2024. (https://www.who.int/groups/child-health-accountability-tracking-technical-
advisory-group, accessed 27 November 2024).
5. Health information system landscape assessment (HISLA): a tool for assessing the
feasibility of collecting, reporting, and using quality of care indicators. Geneva: World
Health Organization; 2025 (https://cdn-auth-cms.who.int/media-aut/docs/default-source/
mca-documents/qoc/hq-2024-01088--web-annex-a.xlsx?sfvrsn=24cf3efd_7).
6. Kruk ME, Gage AD, Arsenault C, Jordan K, Leslie HH, Roder-DeWan S et al. High-quality health systems in the Sustainable Development Goals era: time for a revolution. The Lancet Global Health Commission. Lancet Glob Health. 2018;6(11):e1196–e1252.
7. Quality of Care [website]. Geneva: World Health Organization; 2024 (https://www.who.int/
health-topics/quality-of-care, accessed 27 November 2024).
8. Delivering quality health services: A global imperative for universal health coverage.
Geneva: World Health Organization; 2018 (https://iris.who.int/handle/10665/272465).
9. Kudzma EC. Florence Nightingale and healthcare reform. Nurs Sci Q. 2006;19(1):61–64.
doi:10.1177/0894318405283556.
10. Donabedian A. Evaluating the quality of medical care. Milbank Q. 2005;83(4):691–729 (originally published 1966).
11. Donabedian A. The quality of care. How can it be assessed? Archives of Pathology &
Laboratory Medicine. 1997;121(11):1145–50.
12. Juran JM. The quality trilogy: a universal approach to managing for quality. Wilton (CT): Juran Institute; 1989.
13. Quality of care for maternal and newborn health: a monitoring framework for network
countries. Geneva: Network for Improving Quality of Care for Maternal, Newborn and
Child Health; 2019 (https://cdn.who.int/media/docs/default-source/mca-documents/
qoc/qed-quality-of-care-for-maternal-and-newborn-health-a-monitoring-framework-for-
network-countries.pdf, accessed 27 November 2024).
14. The Network for Improving Quality of Care for Maternal, Newborn and Child Health (Quality of Care Network) [website]. Geneva: World Health Organization; 2024 (https://www.who.int/groups/
Quality-of-care-network, accessed 27 November 2024).
15. Standards for improving quality of maternal and newborn care in health facilities. Geneva:
World Health Organization; 2016 (https://iris.who.int/handle/10665/249155).
16. Standards for improving the quality of care for children and young adolescents
in health facilities. Geneva: World Health Organization; 2018 (https://iris.who.int/
handle/10665/272346).
17. Standards for improving quality of care for small and sick newborns in health facilities.
Geneva: World Health Organization; 2020 (https://iris.who.int/handle/10665/334126).
18. Langley G, Moen R, Nolan K, Nolan T, Norman C, Provost L. The improvement guide: A
practical approach to enhancing organizational performance. San Francisco, CA: Jossey-Bass; 2009.
19. Maternal, newborn, child, adolescent health and ageing and quality of care indicator
metadata toolkit [online database]. Geneva: World Health Organization; 2024 (https://
platform.who.int/data/maternal-newborn-child-adolescent-ageing/indicator-toolkit/
adolescent-health-indicators, accessed 19 June 2024).
20. Heywood A, Rohde J. Using information for action. A manual for health workers at facility
level. Pretoria: Equity Project; 2002.
21. Day LT, Ruysen H, Gordeev VS, Gore-Langton GR, Boggs D, Cousens S et al. “Every Newborn–
BIRTH” protocol: observational study validating indicators for coverage and quality of
maternal and newborn health care in Bangladesh, Nepal and Tanzania. J Glob Health.
2019;9(1): 010902. doi:10.7189/jogh.09.010902.
22. Toolkit on monitoring health systems strengthening. Health information systems. World
Health Organization; 2008 (https://www.who.int/publications/m/item/health-information-
systems, accessed 27 November 2024).
23. Terms related to HIS. Health Information Systems Strengthening Resource Center
[website]. Chapel Hill: MEASURE Evaluation; 2023 (https://www.measureevaluation.org/
his-strengthening-resource-center/his-definitions/terms-related-to-his.html, accessed 27
November 2024).
24. Toolkit for analysis and use of routine health facility data: general principles. Geneva:
World Health Organization; 2021 (https://cdn.who.int/media/docs/default-source/world-
health-data-platform/rhis-modules/general-principles-2021-01-21-final.pdf, accessed 27
November 2024).
25. OpenHIE Health management information system (HMIS) [website]. OpenHIE (https://
guides.ohie.org/arch-spec/openhie-component-specifications-1/openhie-health-
management-information-system-hmis, accessed 27 November 2024).
26. Brook E, World Health Organization. The current and future use of registers in health
information systems. Geneva: World Health Organization; 1974 (https://iris.who.int/
handle/10665/36936).
27. Guideline on registry-based studies. Amsterdam: European Medicines Agency; 2021
(https://www.ema.europa.eu/en/guideline-registry-based-studies-scientific-guideline,
accessed 27 November 2024).
28. Privacy framework. Washington DC: National Institute of Standards and Technology, U.S.
Department of Commerce; 2020 (https://www.nist.gov/privacy-framework, accessed 27
November 2024).
29. MacFeely S. In search of the data revolution: Has the official statistics paradigm shifted?
Stat J IAOS. 2020;36(4):1075–94. doi:10.3233/SJI-200662.
30. SDMX Content-oriented guidelines. Metadata common vocabulary. Statistical Data and
Metadata eXchange; 2006 (https://sdmx.org/wp-content/uploads/Content_04_Draft_Guidelines_Metadata_Common_Vocabulary-MARCH-2006-1.pdf, accessed 27 November
2024).
31. Elmasri R, Navathe SB. Fundamentals of database systems. 7th ed. Harlow: Pearson; 2016.
32. Rowley J. The wisdom hierarchy: representations of the DIKW hierarchy. J Info. Sci.
2007;33(2):163–180.
33. Framework and standards for country health information systems (2nd ed). Geneva: World
Health Organization; 2008 (https://iris.who.int/handle/10665/43872).
34. Every Newborn – Measurement improvement for newborn and stillbirth indicators
(EN-MINI-PRISM) tools for routine health information systems. Chapel Hill: Data for Impact; 2023 (https://www.data4impactproject.org/wp-content/uploads/2023/04/EN-MINI-PRISM-Tools-2.0_TL-23-102-D4I_508.pdf, accessed 27 November 2024).
35. Analysis and use of health facility data: guidance for maternal, newborn, child and
adolescent health programme managers. Geneva: World Health Organization; 2023
(https://iris.who.int/handle/10665/373826).
36. Amayo NA. Approaches to and experiences in standardizing health facility data
collection and reporting tools in Kenya. Presented at: World Health Organization Global
consultation on the standardization of health facility data capture and reporting forms for
maternal, newborn, and child health, and links to home-based records (Geneva, 17–19
September 2024).
37. How to Guide for quality improvement. Johannesburg: The Aurum Institute; 2019 (https://
online.fliphtml5.com/hgjjt/nchh/, accessed 27 November 2024).
38. Brady PW, Tchou MJ, Ambroggio L, Schondelmeyer AC, Shaughnessy EE. Displaying and
analyzing quality improvement data. J Pediatric Infect Dis Soc. 2018;7(2):100–103.
doi:10.1093/jpids/pix077.
39. Perla RJ, Provost LP, Murray SK. The run chart: a simple analytical tool for learning
from variation in healthcare processes. BMJ Qual Saf. 2011;20(1):46–51. doi:10.1136/
bmjqs.2009.037895.
40. Provost LP, Murray SK. The health care data guide: learning from data for improvement.
2nd ed. Hoboken: Jossey-Bass; 2022.
41. Tips and tools for learning improvement. Chevy Chase: Applying Science to Strengthen and
Improve Systems (ASSIST) Project/United States Agency for International Development;
2017 (https://www.urc-chs.com/wp-content/uploads/urc-assist-tips-tools-learning-
improvement.pdf, accessed 27 November 2024).
42. Whitley E, Ball J. Statistics review 1: presenting and summarising data. Crit Care.
2002;6(1):66–71. doi:10.1186/cc1455.
43. Run chart part 2: interpretation of run chart data [video]. St Leonards: Clinical Excellence
Commission; 2021 (https://youtu.be/UvitAnmnx6I?si=e-sQ9abCDzJSW-eX, accessed 27
November 2024).
44. Dehlendorf C, Henderson JT, Vittinghoff E, Steinauer J, Hessler D. Development of
a patient-reported measure of the interpersonal quality of family planning care.
Contraception. 2018;97(1):34–40. doi:10.1016/j.contraception.2017.09.005.
45. Bietsch K, Sonneveldt E. What does the Method Information Index tell us about quality of
service? Glastonbury: Avenir Health; 2020 (https://www.track20.org/download/pdf/MII_
Poster_Portrait_101418.pdf, accessed 27 November 2024).
46. Data quality assurance: module 1: framework and metrics. Geneva: World Health
Organization; 2022 (https://iris.who.int/handle/10665/366086).
47. Data quality assurance: module 2: discrete desk review of data quality. Geneva: World
Health Organization; 2022 (https://iris.who.int/handle/10665/365642).
48. Data quality assurance: module 3: site assessment of data quality: data verification
and system assessment. Geneva: World Health Organization; 2022 (https://iris.who.int/
handle/10665/365643).
49. District data quality assurance: a training package for monthly use of DHIS2 data quality
dashboards at district and health facility levels. Geneva: World Health Organization; 2022
(https://iris.who.int/handle/10665/365745).
50. AbouZahr C, Boerma T, Hogan D. Global estimates of country health indicators: useful,
unnecessary, inevitable? Glob Health Action. 2017;10(sup1):1290370.
doi:10.1080/16549716.2017.1290370.
51. Stevens GA, Alkema L, Black RE, Boerma JT, Collins GS, Ezzati M, et al. Guidelines for
accurate and transparent health estimates reporting: the GATHER statement. Lancet.
2016;388:e19–e23.
52. Byass P, de Courten M, Graham WJ, Laflamme L, McCaw-Binns A, Sankoh OA, et
al. Reflections on the global burden of disease 2010 estimates. PLoS Med.
2013;10(7):e1001477. doi:10.1371/journal.pmed.1001477.
53. Alegana VA, Okiro EA, Snow RW. Routine data for malaria morbidity estimation in Africa:
challenges and prospects. BMC Med. 2020;18:121. doi:10.1186/s12916-020-01593-y.
54. Goldhill DR, Summer A. Data accuracy and outcome prediction. Anaesthesia.
1998;53(10):937–943. doi:10.1046/j.1365-2044.1998.00534.x.
55. Lorenzoni L, Da Cas R, Aparo UL. The quality of abstracting medical information from the
medical record: the impact of training programmes. Int J Qual Health Care. 1999;11(3):209–
213. doi:10.1093/intqhc/11.3.209.
56. Horbar JD, Leahy KA. An assessment of data quality in the Vermont-Oxford
Trials Network database. Control Clin Trials. 1995;16(1):51–61.
doi:10.1016/0197-2456(94)00019-y.
57. Arts DGT, De Keizer NF, Scheffer GJ. Defining and improving data quality in medical
registries: a literature review, case study, and generic framework. J Am Med Inform Assoc.
2002;9:600–611. doi:10.1197/jamia.m1087.
58. Kumah A, Nwogu CN, Issah A-R, Obot E, Kanamitie DT, Sifa JS, et al. Cause-and-effect
(fishbone) diagram: A tool for generating and organizing quality improvement ideas. Glob J
Qual Saf Healthc. 2024;7(2):85–87.
59. Serrat O. The ‘Five Whys’ technique. In: Knowledge solutions: tools, methods and
approaches to drive organizational performance. Singapore: Springer; 2017.
60. Sammut-Bonnici T. Pareto analysis. In: Wiley encyclopedia of management. Hoboken:
John Wiley & Sons; 2015.
61. Five key lessons on building improvement capability. London: The Health Foundation; 2015
(https://www.health.org.uk/newsletter-feature/five-key-lessons-building-improvement-
capability, accessed 27 November 2024).
62. Improving the quality of care for mothers, newborns and children in health facilities:
learner manual. Version 3. New Delhi: World Health Organization. Regional Office for South-
East Asia (https://iris.who.int/handle/10665/331665).
63. Improving care of mothers and babies: a guide for improvement teams [Asia version].
Survive & Thrive; 2016 (https://www.healthynewbornnetwork.org/hnn-content/uploads/
Improving-Care-of-Mothers-and-Babies_Asia-Version_Eng.-2016.pdf, accessed 27
November 2024).
64. Improving care of mothers and babies: a guide for improvement teams [Africa English
version]. Survive & Thrive; 2016 (https://www.urc-chs.com/wp-content/uploads/urc-assist-
improving-care-mothers-babies-en-africa.pdf, accessed 27 November 2024).
65. Améliorer les soins des mères et des nouveau-nés: un guide pour les équipes
d’amélioration [Improving care of mothers and babies: a guide for improvement teams;
French version]. Survive & Thrive; 2016 (https://www.urc-chs.com/
wp-content/uploads/urc-assist-improving-care-mothers-babies-fr.pdf, accessed 27
November 2024).
66. Coaching guide. New Delhi: Point of Care Quality Improvement (https://www.pocqi.org/
wp-content/uploads/2018/07/Coaching-guide.pdf, accessed 27 November 2024).
Annex. Detailed metadata for core MNCH QoC indicators
Core maternal and newborn QoC indicators mapped to the QoC standards
Each indicator entry below lists the indicator name, indicator definition, proposed classification, numerator, denominator, proposed data disaggregation, proposed data source and proposed reporting frequency.

Indicator 1. Institutional maternal mortality ratio
Definition: Number of maternal deaths prior to discharge per 100 000 births (live and stillbirths).
Proposed classification: Outcome/Impact
Numerator: Number of maternal deaths prior to discharge.
Denominator: Number of births (live and stillbirths) in the health facility during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.

Indicator 2. Institutional obstetric case fatality rate
Definition: Percentage of women who delivered at the facility and experienced obstetric complications (regardless of time of onset) and died from these complications before discharge.
Proposed classification: Outcome/Impact
Numerator: Number of women who delivered at the facility and experienced obstetric complications (regardless of time of onset) and died from these complications before discharge.
Denominator: Number of women who delivered at the facility during the reporting period.
Proposed data disaggregation: Direct and indirect complications; type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
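To make the difference in scale between these two measures explicit (a ratio per 100 000 births versus a simple percentage), the calculations implied by the numerators and denominators above can be written out as follows; the shorthand symbols are illustrative and are not notation used elsewhere in this guide:

\[
\text{Institutional MMR} = \frac{\text{maternal deaths before discharge}}{\text{facility births (live + stillbirths)}} \times 100\,000
\]

\[
\text{Obstetric case fatality rate (\%)} = \frac{\text{women who delivered, experienced obstetric complications and died before discharge}}{\text{women who delivered at the facility in the period}} \times 100
\]

For example, 3 maternal deaths among 2000 facility births in a reporting period would give an institutional MMR of (3 / 2000) × 100 000 = 150 per 100 000 births.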
Indicator 3. Pre-discharge neonatal mortality rate (disaggregated by cause)
Definition: Number of babies born live in a facility who die prior to discharge from the facility (up to 28 days of completed life), per 1000 live births in a given year or period. This excludes re-admissions for illness.
Proposed classification: Outcome/Impact
Numerator: Number of babies born live in a facility who die prior to discharge from the facility (up to 28 days of completed life), excluding re-admissions for illness.
Denominator: Number of babies born live in the facility during the reporting period.
Proposed data disaggregation: Cause of death; type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.

Indicator 4. Institutional stillbirth rate
Definition: Number of institutional stillbirths per 1000 institutional births (alive or dead at birth).
Proposed classification: Outcome/Impact
Numerator: Number of babies delivered in a facility with no signs of life and born weighing at least 1000 g or after 28 weeks of gestation.
Denominator: Number of babies born in the facility (live and stillbirths) during the reporting period.
Proposed data disaggregation: Antepartum or intrapartum; type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.

Indicator 5. Immediate administration of a uterotonic after birth for postpartum haemorrhage prevention
Definition: Percentage of women who gave birth in a health facility who received a prophylactic uterotonic immediately after birth (ideally within one minute) for postpartum haemorrhage prevention.
Proposed classification: Process
Numerator: Number of women who gave birth in a facility who received a prophylactic uterotonic immediately after birth (ideally within one minute) for prevention of postpartum haemorrhage.
Denominator: Number of women who gave birth in the facility during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
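Where these aggregates are already captured in a routine health management information system, the per-1000 and percentage computations for indicators 3–5 are mechanical. The following is a minimal sketch in Python, assuming monthly facility aggregates with illustrative field names (live_births, predischarge_neonatal_deaths and so on); none of these names are prescribed by this guide:

```python
# Minimal sketch: computing three of the indicators above from monthly
# facility aggregates. All field names are illustrative assumptions.

def per_1000(numerator: int, denominator: int) -> float | None:
    """Rate per 1000; returns None when the denominator is zero."""
    return round(1000 * numerator / denominator, 1) if denominator else None

def percent(numerator: int, denominator: int) -> float | None:
    """Simple percentage; returns None when the denominator is zero."""
    return round(100 * numerator / denominator, 1) if denominator else None

monthly = {  # hypothetical HMIS extract for one facility-month
    "live_births": 240,
    "stillbirths": 4,
    "predischarge_neonatal_deaths": 3,   # excludes re-admissions
    "women_who_gave_birth": 238,
    "uterotonic_within_one_minute": 221,
}

indicators = {
    # Indicator 3: deaths per 1000 live births
    "pre_discharge_nmr": per_1000(
        monthly["predischarge_neonatal_deaths"], monthly["live_births"]),
    # Indicator 4: stillbirths per 1000 total births (live + stillbirths)
    "institutional_stillbirth_rate": per_1000(
        monthly["stillbirths"],
        monthly["live_births"] + monthly["stillbirths"]),
    # Indicator 5: percentage of all women who gave birth in the facility
    "uterotonic_coverage_pct": percent(
        monthly["uterotonic_within_one_minute"],
        monthly["women_who_gave_birth"]),
}
print(indicators)
```

The same three helper calls can be repeated per disaggregation stratum (for example, per type of health facility) once the aggregates are available at that level.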
Indicator 10. Companion of choice during labour and childbirth
Definition: Percentage of women who wanted and had a companion of choice supporting them during labour and childbirth in the health facility.
Proposed classification: Outcome
Numerator: Number of women who wanted and had a companion supporting them during labour and childbirth in the health facility.
Denominator: Number of women interviewed who wanted a companion during labour and childbirth and who delivered at the facility during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Client exit interviews.
Proposed reporting frequency: Quarterly.

Indicator 11. Physical abuse during labour, childbirth or the postpartum period
Definition: Percentage of women who reported being physically abused at any time during labour, childbirth or the postpartum period in the health facility (physical abuse: slapped, pinched or punched by a health worker or other facility staff).
Proposed classification: Outcome
Numerator: Number of women who report physical abuse during labour or childbirth.
Denominator: Number of women interviewed who delivered at the facility during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Client exit interviews.
Proposed reporting frequency: Quarterly.
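Note that these two exit-interview indicators use different denominators: indicator 10 conditions on the subset of interviewed women who wanted a companion, whereas indicator 11 uses all interviewed women who delivered. Written out (the set notation is illustrative only), with $W$ the set of women interviewed:

\[
I_{10} = 100 \times \frac{\lvert\{\, w \in W : \text{wanted and had a companion} \,\}\rvert}{\lvert\{\, w \in W : \text{wanted a companion} \,\}\rvert},
\qquad
I_{11} = 100 \times \frac{\lvert\{\, w \in W : \text{reported physical abuse} \,\}\rvert}{\lvert W \rvert}.
\]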
Core paediatric and young adolescent QoC indicators mapped to the QoC standards
Indicator 3. Assessment of sick children < 5 years old based on the integrated management of newborn and childhood illness criteria
Definition: Percentage of sick children < 5 years old who were assessed in the health facility based on key integrated management of newborn and childhood illness (IMNCI) criteria: presence or absence of danger signs (inability to drink or breastfeed; vomits everything; convulsions; lethargy) and receipt of a rapid physical and clinical assessment including weight, Z-score or mid-upper arm circumference (MUAC), respiratory rate, temperature, cough, difficult breathing/chest indrawing, diarrhoea/dehydration status, vaccination status and palmar pallor.
Proposed classification: Process/Output
Numerator: Number of sick children < 5 years old who were assessed based on key IMNCI assessment criteria.
Denominator: Number of sick children < 5 years old who visited the health facility during the reporting period.
Proposed data disaggregation: Type of health facility; age (0 – <2 months, 2 months – <5 years); sex.
Proposed data source: Patient medical records.
Proposed reporting frequency: Monthly.
Indicator 4. Treatment of possible severe bacterial infection at outpatient level
Definition: Percentage of young infants (<2 months old) classified as having possible severe bacterial infection (PSBI) or signs of PSBI, or very severe disease or sepsis, who were prescribed appropriate antibiotics according to WHO guidelines. (Signs of PSBI: movement only when stimulated or no movement at all; not feeding well on observation; temperature greater than or equal to 38 °C or less than 35.5 °C; severe chest indrawing; convulsions; fast breathing (60 breaths per minute or more) in infants less than 7 days old.)
Proposed classification: Process/Output
Numerator: Number of young infants (<2 months old) classified as having PSBI or any child with related signs or very severe disease or sepsis, who were prescribed appropriate antibiotics according to WHO guidelines.
Denominator: Number of sick young infants (<2 months old) classified as having PSBI or any child with related signs or very severe disease or sepsis who visited the health facility during the reporting period.
Proposed data disaggregation: Sex; weight cut-offs (<2000 g, ≥2000 g).
Proposed data source: Patient medical records.
Proposed reporting frequency: Monthly.
Indicator 5. Kangaroo mother care for newborns weighing ≤ 2000 g
Definition: Percentage of newborns weighing ≤ 2000 g who are initiated on kangaroo mother care according to WHO guidelines.
Proposed classification: Process/Output
Numerator: Number of newborns weighing ≤ 2000 g who are initiated on kangaroo mother care according to WHO guidelines.
Denominator: Number of newborns weighing ≤ 2000 g who were born in or presented to the facility during the reporting period.
Proposed data disaggregation: Type of health facility; sex; weight (<1500 g, 1500 – <2000 g).
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
Indicator 6. Pneumonia treatment with first-choice antibiotic for children aged between 7 days and 5 years
Definition: Percentage of children aged between 7 days and 5 years who were prescribed amoxicillin for treatment of pneumonia.
Proposed classification: Process/Output
Numerator: Number of children aged between 7 days and 5 years who were diagnosed with pneumonia or showed signs of fast breathing and/or chest indrawing and were prescribed oral amoxicillin.
Denominator: Number of children aged between 7 days and 5 years seen in the health facility with pneumonia or fast breathing and/or chest indrawing during the reporting period.
Proposed data disaggregation: Sex; type of health facility; age (7–59 days, 2 months – <5 years).
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.

Indicator 7. Management of acute watery diarrhoea among children < 5 years old
Definition: Percentage of children < 5 years old diagnosed with acute watery diarrhoea in a health facility who received appropriate treatment for diarrhoea (oral rehydration solution (ORS), plus zinc supplementation if aged 2 months – <5 years).
Proposed classification: Process/Output
Numerator: Number of children < 5 years old who were diagnosed with acute watery diarrhoea and received appropriate treatment for diarrhoea.
Denominator: Number of children < 5 years old with a diagnosis of acute watery diarrhoea who visited the health facility during the reporting period.
Proposed data disaggregation: Age (0–59 days, 2 months – <5 years); sex; type of health facility.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
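The numerator of indicator 7 embeds an age-conditional treatment rule (ORS for all ages, zinc only from 2 months of age). A minimal sketch of that logic, assuming each case record carries an age and the treatments given; the record fields and helper names are illustrative assumptions, not the guideline's data model:

```python
# Hypothetical case records for children diagnosed with acute watery
# diarrhoea; field names are illustrative assumptions.
cases = [
    {"age_days": 45,  "ors": True, "zinc": False},  # under 2 months
    {"age_days": 400, "ors": True, "zinc": True},
    {"age_days": 900, "ors": True, "zinc": False},  # zinc missed
]

def appropriately_treated(case: dict) -> bool:
    """ORS for all ages; zinc additionally required from 2 months to <5 years."""
    if not case["ors"]:
        return False
    if case["age_days"] >= 60:   # roughly 2 months and older
        return case["zinc"]
    return True                  # 0-59 days: ORS alone counts

numerator = sum(appropriately_treated(c) for c in cases)
denominator = len(cases)
print(f"Indicator 7: {100 * numerator / denominator:.1f}%")  # 66.7%
```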
Indicator 8. Children and young adolescents with documented malaria test results
Definition: Percentage of children and young adolescents (<15 years old) in malaria endemic areas who presented to the health facility with fever and whose malaria test results are available (results from microscopy or malaria rapid diagnostic test).
Proposed classification: Process/Output
Numerator: Number of children and young adolescents (<15 years old) in malaria endemic areas who presented to the health facility with fever and whose malaria test results are available.
Denominator: Number of children and young adolescents (<15 years old) in a malaria endemic area who visited the health facility with fever during the reporting period.
Proposed data disaggregation: Type of health facility; sex; diagnosis; age (<1 year, 1 – <5 years, 5–9 years, 10–14 years).
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
Indicator 13. Catch-up immunization for children < 2 years old
Definition: Percentage of children < 2 years of age eligible for DTP-HepB-Hib, IPV, RTV, PCV or measles-containing vaccine who received all catch-up immunizations during medical visits (eligibility: unvaccinated or partially vaccinated with these vaccines according to their age and the national immunization schedule).
Proposed classification: Process/Output
Numerator: Number of children < 2 years of age eligible for DTP-HepB-Hib, IPV, RTV, PCV and measles-containing vaccine who were administered all catch-up immunizations.
Denominator: Number of children < 2 years of age eligible for DTP-HepB-Hib, IPV, RTV, PCV and measles-containing vaccine who received medical care in the health facility during the reporting period.
Proposed data disaggregation: Sex; age (<1 year, 1 – <2 years); type of antigen.
Proposed data source: Routine health management information system.
Proposed reporting frequency: Monthly.
Indicator 14. Inappropriate use of antibiotics for cough or cold in children and young adolescents
Definition: Percentage of children and young adolescents (<15 years old) seen in a health facility with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment (e.g. pneumonia, severe pneumonia, severe acute malnutrition, very severe disease, sepsis, meningitis, dysentery, cholera, HIV+), who were prescribed an antibiotic.
Proposed classification: Process
Numerator: Number of children and young adolescents (<15 years old) seen in a health facility with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment (e.g. pneumonia, severe pneumonia, severe acute malnutrition, very severe disease, sepsis, meningitis, dysentery, cholera, HIV+), who were prescribed an antibiotic.
Denominator: Number of children and young adolescents (<15 years old) seen in a health facility during the reporting period with a cough or cold, or an unspecified respiratory tract infection, and without a comorbidity requiring antibiotic treatment (e.g. pneumonia, severe pneumonia, severe acute malnutrition, very severe disease, sepsis, meningitis, dysentery, cholera, HIV+).
Proposed data disaggregation: Type of health facility; sex; diagnosis.
Proposed data source: Patient medical records.
Proposed reporting frequency: Monthly.
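To make the case-selection logic of indicator 14 concrete, here is a minimal sketch of the denominator filter and the numerator flag, under the assumption that each visit record lists its diagnoses and whether an antibiotic was prescribed; all field names are illustrative:

```python
# Comorbidities that justify antibiotics, per the indicator definition.
EXCLUDING_COMORBIDITIES = {
    "pneumonia", "severe pneumonia", "severe acute malnutrition",
    "very severe disease", "sepsis", "meningitis", "dysentery",
    "cholera", "hiv+",
}
RESPIRATORY_PRESENTATIONS = {"cough", "cold", "unspecified rti"}

def in_denominator(visit: dict) -> bool:
    """Under-15 visit with cough/cold/unspecified RTI and no excluding comorbidity."""
    diagnoses = {d.lower() for d in visit["diagnoses"]}
    return bool(
        visit["age_years"] < 15
        and diagnoses & RESPIRATORY_PRESENTATIONS
        and not diagnoses & EXCLUDING_COMORBIDITIES
    )

visits = [  # hypothetical records
    {"age_years": 3, "diagnoses": ["cough"], "antibiotic_prescribed": True},
    {"age_years": 7, "diagnoses": ["cold"], "antibiotic_prescribed": False},
    {"age_years": 2, "diagnoses": ["cough", "pneumonia"], "antibiotic_prescribed": True},
]

eligible = [v for v in visits if in_denominator(v)]
numerator = sum(v["antibiotic_prescribed"] for v in eligible)
print(f"Indicator 14: {100 * numerator / len(eligible):.1f}%")  # 50.0%
```

Note that the third visit is excluded from the denominator entirely (pneumonia justifies the antibiotic), rather than being counted as appropriate prescribing.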
Indicator 15. Completion of medical documentation for children and young adolescents
Definition: Percentage of children and young adolescents (<15 years old) seen in a health facility with complete key patient information in the health facility register (patient demographic data, assessment findings, classification/diagnosis, treatment, counselling and care outcomes).
Proposed classification: Input
Numerator: Number of children and young adolescents (<15 years old) seen in a health facility with complete key patient information in the health facility register.
Denominator: Number of children and young adolescents (<15 years old) seen in a health facility during the reporting period.
Proposed data disaggregation: Type of health facility; sex; age groups (<1 year, 1 – <5 years, 5–9 years, 10–14 years).
Proposed data source: Health facility register.
Proposed reporting frequency: Monthly.
Indicator 16. Quality of care data reviews for children and young adolescents
Definition: Percentage of health facilities that conducted monthly quality of care data reviews for patients under 15 years old in the past 3 months.
Proposed classification: Process/Output
Numerator: Number of health facilities that conducted monthly quality of care data reviews for patients under 15 years old in the past 3 months.
Denominator: Number of facilities assessed during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Survey.
Proposed reporting frequency: Quarterly.

Indicator 17. Knowledge and understanding of the condition and treatment plan among children and young adolescents or their caregivers
Definition: Percentage of children and young adolescents (<15 years old) or their caregivers who can describe the child’s condition and how to administer home treatment.
Proposed classification: Process/Output
Numerator: Number of children and young adolescents (<15 years old) or their caregivers who can describe the child’s condition and how to administer home treatment.
Denominator: Number of children and young adolescents (<15 years old) or their caregivers who were interviewed during the reporting period.
Proposed data disaggregation: Respondent; service level (inpatient, outpatient); type of health facility.
Proposed data source: Facility survey, client exit interviews or similar assessments.
Proposed reporting frequency: Quarterly.
Indicator 18. Satisfaction with the decision-making process for care
Definition: Percentage of children and young adolescents (<15 years old) or their caregivers who are satisfied with the care decision-making process.
Proposed classification: Outcome (patient-reported)
Numerator: Number of children and young adolescents (<15 years old) or their caregivers who are satisfied with the care decision-making process.
Denominator: Number of children and young adolescents (<15 years old) or their caregivers who were interviewed during the reporting period.
Proposed data disaggregation: Type of health facility; respondent; health condition.
Proposed data source: Facility survey, client exit interviews or similar assessments.
Proposed reporting frequency: Quarterly.
Indicator 19. Pre-discharge counselling on danger signs and feeding for children < 5 years old
Definition: Percentage of caregivers of children under 5 years old who are aware of the danger signs of paediatric illness, when to seek care, and how to manage feeding during illness.
Proposed classification: Process/Output
Numerator: Number of caregivers of children under 5 years old who are aware of the danger signs of paediatric illness, when to seek care, and how to manage feeding during illness.
Denominator: Number of caregivers of children < 5 years old who received care and were interviewed in the health facility during the reporting period.
Proposed data disaggregation: Type of health facility.
Proposed data source: Facility survey, client exit interviews or similar assessments.
Proposed reporting frequency: Quarterly.

Indicator 20. Awareness of child rights during health care
Definition: Percentage of children and young adolescents (<15 years old) or their caregivers who reported being adequately informed about their rights to care (e.g. free treatment, medication, food, bedding, rooming-in).
Proposed classification: Process/Output
Numerator: Number of children and young adolescents (<15 years old) or their caregivers who reported being adequately informed about their rights to care.
Denominator: Number of children and young adolescents (<15 years old) or their caregivers who were interviewed during the reporting period.
Proposed data disaggregation: Respondent; age groups (<1 year, 1 – <5 years, 5–9 years, 10–14 years); type of health facility.
Proposed data source: Facility survey, client exit interviews or similar assessments.
Proposed reporting frequency: Quarterly.
Indicator 23. Access to play and educational material during hospitalization
Definition: Percentage of children (or their caregivers) who reported that the child was able to play and access educational materials during hospitalization.
Proposed classification: Input
Numerator: Number of children (or their caregivers) who reported that the child was able to play and access educational material during hospitalization.
Denominator: Number of children treated as inpatients (or their caregivers) who were interviewed in the health facility during the reporting period.
Proposed data disaggregation: Age groups (<1 year, 1 – <5 years, 5–9 years, 10–14 years); type of health facility.
Proposed data source: Survey or interview records.
Proposed reporting frequency: Quarterly.

Indicator 24. Clinical mentorship or training for child care providers
Definition: Percentage of health workers providing care for children who received clinical mentorship or training in the last 6 months.
Proposed classification: Input
Numerator: Number of health workers providing care for children who received clinical mentorship or training in the last 6 months.
Denominator: Number of health workers providing care for children who were interviewed during the reporting period.
Proposed data disaggregation: Provider cadre; facility type.
Proposed data source: Provider interviews.
Proposed reporting frequency: Quarterly.

Indicator 25. Stock-out of essential child health medicines
Definition: Number of days in the past 3 months when there were stock-outs of at least 3 essential child health medicines (amoxicillin, injectable gentamicin and zinc).
Proposed classification: Input
Numerator: Total number of days with stock-outs of at least three essential medicines.
Denominator: Not applicable.
Proposed data disaggregation: Inpatient/outpatient.
Proposed data source: Inventory of the pharmacy or dispensary.
Proposed reporting frequency: Quarterly.
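Because indicator 25 is a count of days rather than a proportion, the daily stock records need to be scanned jointly across the tracer medicines. A minimal sketch, assuming a per-day out-of-stock log with illustrative field names; the data structure is an assumption, not a prescribed format:

```python
from datetime import date, timedelta

# Tracer medicines named in indicator 25.
TRACERS = ("amoxicillin", "injectable gentamicin", "zinc")

# Hypothetical daily stock log: {medicine: set of dates it was out of stock}.
out_of_stock = {
    "amoxicillin": {date(2024, 5, 2), date(2024, 5, 3)},
    "injectable gentamicin": {date(2024, 5, 2), date(2024, 5, 3), date(2024, 5, 9)},
    "zinc": {date(2024, 5, 3), date(2024, 5, 9)},
}

def stockout_days(start: date, days: int, threshold: int = 3) -> int:
    """Count days in the window on which at least `threshold` tracers were out."""
    count = 0
    for offset in range(days):
        day = start + timedelta(days=offset)
        if sum(day in out_of_stock[m] for m in TRACERS) >= threshold:
            count += 1
    return count

# Only 3 May has all three tracers out of stock simultaneously.
print(stockout_days(date(2024, 5, 1), 92))  # -> 1
```

With exactly three tracer medicines, the threshold of "at least 3" means all three must be out on the same day for that day to count.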