Note: This page was created using content published by Good Ventures and GiveWell, the organizations that created the Open Philanthropy Project, before this website was launched. Uses of “we” and “our” on this page may therefore refer to Good Ventures or GiveWell, but they still represent the work of the Open Philanthropy Project.
This page provides a high-level description of the Service Delivery Indicators Program (SDI), to which Good Ventures granted $500,000 in 2013 based on a joint assessment with GiveWell. The program has an estimated budget of $27 million over the first five years, of which other funders had committed about $10 million as of January 2013.1
Funding and following the project is a learning opportunity for GiveWell and Good Ventures. We will be continuing our analysis of the project; below we report on what we have learned to date. Key remaining questions include:
- Strength of the indicators: How accurately and consistently will the indicators measure overall service quality? How vulnerable are they to ‘gaming’ by sufficiently motivated governments, service providers, or survey staff?
- Usage of the indicators: Who will use the indicators? What impact will the existence of the indicators have?
- Scale up: What factors might prevent SDI from scaling up to 10-15 countries on schedule?
We spoke to Gayle Martin about updates on SDI on July 30, 2014.
About the program
The Service Delivery Indicators Program (SDI) was designed to create and promote the use of objective measures of the quality of health and education services in Africa. According to the World Bank, which helped create SDI, “No set of indicators is available for measuring service delivery and quality at schools and clinics from the citizens’ perspective.”2 By creating metrics for health and education services that are consistent within and among countries, SDI hopes to increase attention, measurability, and accountability for the success of those services, leading to long-term improvements in health and education.3
SDI will assess 10-15 countries in Africa on each of the indicators based on a random sample of health and primary school facilities.4 The countries will be selected to maximize the impact of the program.5 The indicators are designed to measure the availability of important health and education inputs as well as the knowledge and effort of the service providers.6 The intention is to “provide a useful snapshot of actual performance as well as possible constraints that may undermine the delivery of quality services.”7 The hope is that, long term, the existence and usage of the indicators will help improve health and education in the participating countries.8
Program methodology
Data collection
Each of the 10-15 countries will be assessed at regular intervals of about two to three years, staggered so that a similar number of countries are assessed each year.9 Each time a country is assessed, SDI will select a sample of about 200-300 primary health facilities and about 200-300 primary schools to be evaluated.10 The sample size and survey design are chosen “with the aim of producing nationally representative indicators with sufficient precision to identify changes in the indicators of around 5-7 percentage points over time.”11
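To get a rough sense of how a sample of this size relates to the stated 5-7 percentage point target, the sketch below runs a simple two-sample detectable-change calculation for a proportion-style indicator. This is a minimal illustration under assumed parameters (250 facilities per round, 5% significance, 80% power), not SDI's actual survey design, which involves stratification, weighting, and within-facility provider samples that this back-of-the-envelope calculation ignores.

```python
from math import sqrt

def minimal_detectable_change(p, n, z_alpha=1.96, z_beta=0.84):
    """Approximate change (in percentage points) detectable between two
    independent survey rounds of n facilities each, for an indicator with
    baseline proportion p, using a two-sample normal approximation
    (5% significance and 80% power by default). Ignores stratification,
    clustering, and weighting, so this is illustrative only."""
    se_diff = sqrt(2 * p * (1 - p) / n)
    return (z_alpha + z_beta) * se_diff * 100

# Illustrative baselines with an assumed 250 facilities per round.
for p in (0.05, 0.10, 0.25, 0.50):
    print(f"baseline {p:.0%}: detectable change ~ "
          f"{minimal_detectable_change(p, 250):.1f} percentage points")
```

Under these simplified assumptions, a 250-facility sample can detect changes of roughly 5-8 percentage points for indicators with baselines near 5-10%, but only coarser changes for indicators near 50%; SDI's actual sample-size reasoning is not detailed in the documents we have seen, so its design may differ from this sketch.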
The indicators (listed below) will be measured by enumerators who travel in teams of two to the selected facilities and directly observe the resources available to teachers and health workers, observe the quality of their services, and administer tests of their relevant knowledge.12 The enumerators will be employed by an organization operating in that country to implement SDI, assisted by the World Bank,13 and will be trained and managed by survey supervisors within the same organization.14 The survey supervisors will also be responsible for quality assurance, including spot checks completed by field coordinators and supervision visits with the enumerators.15 We are not aware of plans to conduct independent audits of the data.16
The following explanations of the indicators are based on the SDI Definitions 2013, which reflects updates to the indicators based on experience in Kenya.
Education indicators
SDI will measure seven indicators in primary schools: three measures of input availability at schools and four measures of the knowledge and effort of teachers.17
School inputs:
- Minimum teaching equipment, measured as the average of the following in 4th grade classrooms: the fraction of students with pens, the fraction of students with notebooks, and the existence (“1” for existent, “0” for nonexistent) of a functional chalkboard.18 A sketch of this calculation appears at the end of this section.
- Textbooks per student, measured as the average number of math and language books in 4th grade classrooms per student.19
- School infrastructure, measured as a fraction reflecting whether classrooms have light sufficient for reading and whether toilets are accessible, functioning, clean, and private.20
Teacher efficacy:
- Absence from school, based on the attendance of a random sample of up to 10 teachers during a later unannounced visit.21
- Absence from classroom, based on the location of teachers at the school during an unannounced visit.22
- Share of teachers with minimum knowledge, based on a test measuring math and English knowledge from the curriculum, given to teachers of those subjects.23
- Time spent teaching in the classroom, based on direct observation, including, for example, interacting with students, grading students’ work, having students work on a specific task, and maintaining discipline, but not working on private matters, doing nothing, or leaving the classroom altogether.24
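As a concrete illustration of how a composite input indicator like “minimum teaching equipment” can be computed from raw observations, here is a minimal sketch. The record fields and the equal weighting of the three components are assumptions for illustration; the authoritative definitions are in the SDI Definitions 2013.

```python
from dataclasses import dataclass

@dataclass
class ClassroomObservation:
    """Hypothetical record from one observed 4th grade classroom."""
    students: int
    students_with_pen: int
    students_with_notebook: int
    has_functional_chalkboard: bool

def minimum_teaching_equipment(obs: ClassroomObservation) -> float:
    """Average of the fraction of students with pens, the fraction with
    notebooks, and a 0/1 chalkboard term, per the description above."""
    pens = obs.students_with_pen / obs.students
    notebooks = obs.students_with_notebook / obs.students
    chalkboard = 1.0 if obs.has_functional_chalkboard else 0.0
    return (pens + notebooks + chalkboard) / 3

# Illustrative numbers: 30 of 40 students with pens (0.75), 36 of 40 with
# notebooks (0.90), chalkboard present (1.0) -> (0.75 + 0.90 + 1.0) / 3.
print(minimum_teaching_equipment(ClassroomObservation(40, 30, 36, True)))  # ~0.88
```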
Health indicators
SDI will measure eight indicators in primary health facilities: four measures of input availability and four measures of the knowledge and effort of health workers.
Health facility inputs:
- Equipment availability, a binary measure counted as “1” if the health facility has at least one functioning thermometer, stethoscope, sphygmomanometer, and weighing scale, plus a refrigerator and sterilization equipment for larger facilities, and as “0” otherwise.25
- Drug availability, measured as the share of 26 specific drugs that are in stock and not expired on the day of observation.26
- Caseload per health provider, measured as the number of outpatient visits over the prior three months divided by the product of the number of days the facility was open and the number of health workers who conduct outpatient consultations during that period.27 This and the drug availability calculation are illustrated in the sketch at the end of this section.
- Health facility infrastructure, a binary measure counted as “1” if the facility has electricity, water, and sanitation, and as “0” otherwise.28
Health worker efficacy:
- Absence rate, based on the absence of a random sample of up to 10 on-duty health workers.29
- Diagnostic accuracy, measured as the fraction of five hypothetical patient case scenarios for which prescribers are able to mention the correct diagnosis.30
- Management of maternal and neonatal complications, measured as the fraction of total relevant treatment actions proposed during a case study of post-partum hemorrhage and a case study of neonatal asphyxia.31
- Adherence to clinical guidelines, measured as the average fraction of history taking questions and examination questions that were asked by a clinician for each of five case study patients.32
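Two of the measures above reduce to simple arithmetic, and a minimal sketch may help make the definitions concrete. The function names and example numbers are hypothetical; only the formulas follow the descriptions above.

```python
def drug_availability(in_stock_unexpired: int, drugs_tracked: int = 26) -> float:
    """Share of the tracked drugs (26 per the description above) that are
    in stock and not expired on the day of observation."""
    return in_stock_unexpired / drugs_tracked

def caseload_per_provider(outpatient_visits: int, days_open: int,
                          outpatient_providers: int) -> float:
    """Outpatient visits over the prior three months divided by the product
    of days the facility was open and the number of health workers who
    conduct outpatient consultations, i.e. daily visits per provider."""
    return outpatient_visits / (days_open * outpatient_providers)

# Illustrative numbers: 13 of 26 drugs available -> 0.5; 1,800 visits over
# 60 open days with 3 outpatient providers -> 10 visits per provider-day.
print(drug_availability(13))               # 0.5
print(caseload_per_provider(1800, 60, 3))  # 10.0
```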
Data dissemination and capacity building
Once the data are collected, SDI will encourage their use by institutions and the public, both within each country and internationally, to maximize their impact.33 SDI also plans for the program to build participating countries’ capacity to conduct the surveys and analyze their results.34
Evaluating program success
Strength of the indicators
Measuring these indicators is an ambitious attempt to collect important data from a wide range of environments. With the information we currently have, it is difficult to know:
- How consistently the indicators will be defined and measured among locations and over time
- How representative these indicators are of overall service quality, if accurately measured
- How vulnerable the indicators are to gaming by sufficiently motivated governments, service providers, and survey staff
- How burdensome the survey process is for service providers and facilities. SDI reports that, as of September 2013, no individual had been interviewed for more than two hours at a time.35
Usage of the indicators
SDI plans to monitor the program’s success using two intermediate indicators:36
- “Public debate on education and health service delivery [is] initiated and/or informed.”
- “Stakeholders (policymakers, media, NGOs, CSO) reporting use of SDI analysis within 6 months after any of the SDI dissemination events.”
Scale up
SDI’s success at implementing and disseminating the indicators on schedule can be tracked. As of September 2013, SDI had already held a highly visible launch of Kenya’s data in July and planned to complete launches in four more countries by the end of 2013.37 In one of those countries, Nigeria, because of the country’s size, SDI will measure the health indicators in only six states and the education indicators in only four states.38
We know that the project intends to assess each of 10-15 countries at regular intervals of about two to three years, staggered so that a similar number of countries are assessed each year.39 By the end of the third full year, 2015, we will know how its progress compares to this vision,40 though we also hope to understand in more detail the anticipated obstacles to scale-up.
Updates
In March 2015, we published our first update on SDI’s progress to date.
Sources
| DOCUMENT | SOURCE |
| --- | --- |
| Gayle Martin, World Bank Senior Economist, call with GiveWell on April 17, 2013 | Unpublished |
| Gayle Martin, World Bank Senior Economist, email with GiveWell on September 9, 2013 | Unpublished |
| SDI Definitions 2013 | Source |
| SDI Implementation Update January 2013 | Source |
| SDI Kenya Education Survey Instrument 2012 | Unpublished |
| SDI Kenya Health Survey Instrument 2012 | Unpublished |
| SDI Program Document 2011 | Source |