met, then only the existence of the program under study will account for any differences in the outcome (Y) between the two groups.

Instead of considering the impact solely for one person, it is more realistic to consider the average impact for a group of people (Figure 1).
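
A compact way to state this counterfactual logic, and the basic impact evaluation formula applied in the rice example below, is the following (the notation Δ, Y and P is a common convention and is not taken from the source's figures):

\Delta = E[Y \mid P = 1] - E[Y \mid P = 0]

Here the first term is the average outcome of the group that received the program (P = 1), and the second term is the counterfactual: the average outcome the same group would have had without the program. Because the counterfactual is never observed directly, it has to be approximated by a comparison group that meets the conditions described above.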

It is important to consider what happens if we decide to proceed with the evaluation without finding a comparison group. Without a valid comparison group, our estimate of the counterfactual will be unreliable, and we run the risk of making inaccurate judgments about program outcomes.

Such a risk exists when using the following approaches:

• Before-and-after comparisons (also known as reflexive comparisons): comparing the outcomes of the same group prior to and subsequent to the introduction of a program.

• With-and-without comparisons: comparing the outcomes in the group that chose to enroll with the results of the group that chose not to enroll.

A before-and-after comparison attempts to establish the impact of the program by tracking changes in outcomes for program participants over time. In essence, this comparison assumes that if the program had never existed, the outcome (Y) for program participants would have been exactly the same as their situation before the program. Unfortunately, in the vast majority of cases that assumption simply does not hold.

Consider, for example, the evaluation of a microfinance program for rural farmers. The program provides farmers with microloans to help them buy fertilizer to increase rice production. You observe that in the year before the start of the program, farmers harvested an average of 1,000 kilograms (kg) of rice per hectare (point B in Figure 2).

The microfinance scheme is launched, and a year later rice yields have increased to 1,100 kg per hectare (point A in Figure 2). If you try to evaluate impact using a before-and-after comparison, you have to use the pre-intervention outcome as a counterfactual. Applying the basic impact evaluation formula, you would conclude that the scheme had increased rice yields by 100 kg per hectare (A − B).

However, imagine that rainfall was normal during the year before the scheme was launched, but a drought occurred in the year the program started. Because of the drought, the average yield without the microloan scheme would have been lower than B: say, at level D. In that case, the true impact of the program would have been A − D, which is larger than the 100 kg estimated by the before-and-after comparison.
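
To make the arithmetic of this example explicit, the sketch below (in Python) computes both estimates. The values for A and B are taken from the text; the drought-year counterfactual D = 950 kg/ha is a purely hypothetical number chosen for illustration, since the text only says that D lies below B.

# Before-and-after estimate vs. a drought-adjusted counterfactual.
yield_with_program = 1100    # A: observed yield in the program year, kg/ha
yield_year_before = 1000     # B: observed yield the year before, kg/ha
yield_drought_no_loan = 950  # D: assumed yield without the program in the drought year (illustrative)

naive_impact = yield_with_program - yield_year_before     # A - B = 100 kg/ha
true_impact = yield_with_program - yield_drought_no_loan  # A - D = 150 kg/ha

print(f"Before-and-after estimate: {naive_impact} kg/ha")
print(f"Estimate against the drought counterfactual: {true_impact} kg/ha")

The before-and-after comparison understates the impact here only because the external shock (the drought) pushed the counterfactual down; a shock in the other direction would make it overstate the impact instead.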

Rainfall was one of many external factors that could have influenced the outcome of interest (rice yield) over time. Similarly, many of the outcomes that development programs aim to improve, such as income, productivity, health or education, are affected by multiple factors over time. For this reason, the pre-intervention outcome is almost never a good estimate of the counterfactual.

Comparing those who chose to enroll with those who chose not to enroll ("with-and-without") constitutes another risky approach to impact evaluation. A comparison group that selected itself out of the program provides yet another "counterfeit" counterfactual estimate. Selection occurs when participation in the program is based on the preferences or decisions of individual participants. Those preferences are themselves a factor on which the outcome may depend, so under such conditions those who enrolled cannot be assumed to be comparable to those who did not.
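
A small simulation can show why self-selection produces a "counterfeit" counterfactual. Everything below is hypothetical and only illustrates the logic: an unobserved trait ("motivation") makes farmers both more likely to enroll and more productive regardless of the loan, so the with-and-without gap exceeds the true effect, which is fixed at 100 kg/ha by assumption.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000
true_effect = 100.0  # kg/ha, set by assumption in this toy example

# Unobserved motivation raises both the chance of enrolling and the yield itself.
motivation = rng.normal(0.0, 1.0, size=n)
enrolled = rng.random(n) < 1.0 / (1.0 + np.exp(-motivation))  # self-selection
yield_without_program = 1000 + 80 * motivation + rng.normal(0, 50, size=n)
observed_yield = yield_without_program + true_effect * enrolled

with_and_without_gap = observed_yield[enrolled].mean() - observed_yield[~enrolled].mean()
print(f"True effect:          {true_effect:.0f} kg/ha")
print(f"With-and-without gap: {with_and_without_gap:.0f} kg/ha")  # noticeably larger than 100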

The consultants evaluating the HISP pilot, in their attempts to quantify the results, made both of these mistakes in estimating the counterfactual; the program organizers, recognizing the risk of bias, decided to look for methods that would allow a more accurate evaluation.

RANDOMIZED ASSIGNMENT METHOD

This method is similar to running a lottery that decides who is enrolled in the program at a given time and who is not. The method is also known as a randomized controlled trial (RCT). Not only does it give the project team fair and transparent rules for allocating limited resources among equally eligible populations, but it also provides a robust method for evaluating program impact.

"Randomness" applies to a large population cluster having a homogeneous set of qualities. In order to decide who will be given access to the program and who will not, we can also generate a basis for a reliable counterfactual evaluation.

Under randomized assignment, each eligible unit (e.g., individual, household, business, school, hospital, or community) has the same probability of being selected for the program. When there is excess demand for the program, randomized assignment is considered transparent and fair by all participants in the process.
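
As a minimal sketch of how such an assignment rule might be implemented, the snippet below randomly splits a list of eligible units into a treatment group (those offered the program) and a comparison group. The village names and the number of available slots are invented for illustration.

import random

def randomized_assignment(eligible_units, n_slots, seed=42):
    """Randomly split eligible units into treatment and comparison groups.
    Every unit has the same probability of being offered the program."""
    rng = random.Random(seed)  # a fixed seed keeps the draw reproducible and auditable
    shuffled = list(eligible_units)
    rng.shuffle(shuffled)
    return shuffled[:n_slots], shuffled[n_slots:]

# Hypothetical example: six eligible villages, three program slots.
villages = ["village_A", "village_B", "village_C", "village_D", "village_E", "village_F"]
treatment, comparison = randomized_assignment(villages, n_slots=3)
print("Treatment group: ", treatment)
print("Comparison group:", comparison)

Because the split is random, the two groups are comparable on average, and the comparison group provides the counterfactual estimate that the flawed approaches above could not.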

Insert 1 provides examples of the use of randomized assignment in practice.

Insert 1: RANDOMIZED CONTROLLED TRIALS AS A VALUABLE OPERATIONAL TOOL

Randomized assignment can be a useful rule for assigning program benefits, even outside the context of an impact evaluation. The following two cases from Africa illustrate how.

In Côte d'Ivoire, following a period of crisis, the government introduced a temporary employment program that was initially targeted at former combatants and later expanded to youth more generally. The program provided youth with short-term employment opportunities, mostly to clean or rehabilitate roads through the National Roads Agency. Young people in participating municipalities were invited to register. Given the attractiveness of the benefits, many more candidates applied than there were places available. In order to come up with a transparent and fair way of allocating the benefits among applicants, program implementers put in place a public lottery process. Once registration had closed and the number of applicants (say, N) in a location was known, a public lottery was organized. All candidates were invited to a public location, and small pieces of paper with numbers from 1 to N were put in a box. Applicants were then called one by one to come to draw a number from the box in front of all other candidates. Once the number was drawn, it was read aloud. After all applicants were called, someone would check the remaining numbers in
