“More Evidence, Less Poverty.” This is Innovations for Poverty Action’s official slogan. Its objective is to determine which poverty-reduction interventions work, which don’t, and which are the most cost-effective. IPA pursues this goal by implementing rigorous “impact evaluations.” Specifically, the organization conducts Randomized Controlled Trials (RCTs), widely regarded as the most reliable method of determining the causal impact of a program. The defining characteristic of an RCT is the random assignment of a sample of the target demographic into different treatment arms, or interventions. The relative success of a particular program is then measured through differences between these treatment arms in chosen outcomes, ranging from income to health. The data needed to evaluate a program’s impact are collected through multiple rounds of surveys administered to the study’s participants.
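The design described above, random assignment into treatment arms, can be sketched in a few lines of Python (the participant IDs and arm names here are purely illustrative, not taken from the actual study):

```python
import random

def assign_arms(participants, arms, seed=0):
    """Randomly assign each participant to one treatment arm.

    Shuffling with a fixed seed keeps the assignment reproducible,
    then dealing round-robin keeps the arm sizes balanced.
    """
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    assignment = {arm: [] for arm in arms}
    for i, p in enumerate(shuffled):
        assignment[arms[i % len(arms)]].append(p)
    return assignment

# Hypothetical example: 12 participants split across a control group
# and three intervention arms.
groups = assign_arms(list(range(12)), ["control", "values", "livelihood", "health"])
```

Because assignment is random, any systematic difference in later outcomes between the arms can be attributed to the intervention rather than to pre-existing differences between the groups.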
IPA usually partners with local NGOs to examine a specific intervention that has already been implemented but hasn’t yet been evaluated with an RCT. For the project I worked on, IPA partnered with a Filipino NGO called International Care Ministries (ICM) to examine Transform, one of its programs to combat poverty. Transform targeted ultra-poor Filipinos with a multidisciplinary approach built on three components: a Values program that taught Christian values, a Livelihood program that taught people how to create small businesses and better provide for themselves, and a Health program that taught basic health practices. ICM delivers this program to tens of thousands of Filipinos per year, and we wanted to measure its efficacy.
To measure a program’s efficacy, IPA hires local employees and trains them to ask survey questions in a very specific and unbiased way. After a round (or several rounds) of surveying has been completed, the responses are compiled and analyzed with statistical software. By comparing the different treatment arms to one another, researchers can sometimes draw conclusions about the effects of the programs and their components. But it isn’t as simple as collecting your data and running regressions: gathering high-quality, accurate data, meaning data that come as close as possible to the truth of the situation on the ground, is extremely difficult and requires many different checks that are too long and complicated to discuss here.
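In its simplest form, the comparison between two treatment arms is a difference in mean outcomes, which is also what a regression of the outcome on a treatment dummy would estimate. A minimal sketch (the outcome values below are made-up numbers for illustration, not real survey data):

```python
import statistics

def difference_in_means(treated, control):
    """Estimate the average treatment effect as the gap in mean outcomes
    between the treated arm and the control arm."""
    return statistics.mean(treated) - statistics.mean(control)

# Hypothetical post-survey incomes for a treated arm and a control arm.
treated_incomes = [120, 135, 110, 145]
control_incomes = [100, 115, 95, 110]
effect = difference_in_means(treated_incomes, control_incomes)  # 22.5
```

In practice, researchers would also compute standard errors and adjust for survey design, but the core estimand is this simple contrast between arms.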
My duties ranged widely: entering data, editing and suggesting corrections to the survey, and training the six field staff I supervised in how to conduct the survey and submit the necessary documentation. Managing the different employees at my base was challenging: balancing personalities, tailoring trainings to different people’s needs, and running a mid-week series of emergency interviews to fill a recently opened position. Still, it was one of the best educational experiences I’ve ever had.