What global education can, and can't, learn from global health

What can global education learn from global health? That was the topic of a recent article on the Center for Education Innovations website (the content of which, I later discovered, was largely drawn from a roundtable in New York back in September with the same title). “I disagree with approximately all of this article,” I tweeted snarkily after reading it. But Lee Crawfurd told me I couldn’t throw shade without explaining myself, so here goes.

First, the notion that “the sheer scale of the problem” somehow sets global education apart from global health (or indeed other sectors) strikes me as a strange starting point in thinking about what we can learn. For one thing, I’m not sure it’s possible to place completely different societal problems on some common scale of difficulty. But supposing we do agree to treat development as a morbid game of Top Trumps: what are the categories that make education a winner? That “57 million children around the world do not go to school” is indeed a tragedy; that in low-income countries 1 in 14 still die before they are even old enough to go to school is too.

Second, the argument that learning is harder to measure than health outcomes, or can only be measured over decades, baffles me. We have some immensely reliable measures of children’s learning. They are called standardized tests. And before you say it, no I don’t think that everything that’s important about education can be captured in a standardized test. I do think that some important things can be captured in a standardized test, particularly where they are low stakes for the student and particularly where baseline levels of learning are extremely low, as they are in the countries worst affected by the global learning crisis. Nor is it impossible to see important changes in learning outcomes occur in relatively short periods of time. Take Partnership Schools for Liberia: whatever your views on the programme, it’s clear from the midline report of the RCT that it achieved significant gains in learning outcomes in a single academic year. Indeed, if you’re nerdy enough to read the fine print of the midline report, you'll see that the evaluators found some learning gains could already be detected in the early weeks of the programme.* As Lant Pritchett has argued, development is a complex modernization process, some features of which – like good governance and pluralism and human rights – really are hard to measure precisely. Education is not one of them.

While we’re on assessment, let’s dispense with two of the other arguments offered here: first, the idea that assessment methods in well-developed education systems inhibit the development of higher-order thinking skills is debatable, but I’m happy to concede there’s a discussion to be had there. But when it comes to the countries bearing the brunt of the global learning crisis, I’ve seen no evidence to suggest that their assessment systems – as problematic as they may be in some instances – are really the binding constraint to addressing the problem of shockingly low learning levels. If you don’t believe me, read Tessa Bold and co’s brilliant study of teacher quality and behaviour across seven countries covering 40% of Africa’s school age population and ask whether any of those findings would be ameliorated by a different assessment system. Second, can we please stop saying that the problem is “we’re teaching kids what was useful 100 years ago”? The problem is too often we’re not teaching kids what was useful 100 years ago, and would still be useful to them today, and for that matter will still be useful in 25 years even when we’re all out of a job and our snarky blogposts are being automatically generated by backflipping AI robots.

Third, I’m sceptical that the issue is a lack of awareness about “best practices” on the part of policy-makers or local implementers. I come back to Tessa Bold’s work: the puzzle is not that policy-makers are doing the ‘wrong’ things so much as that they are doing the ‘right’ things and finding that they don’t work:

"Over the last 15 years, more than 200 randomized controlled trials have been conducted in the area of education. However, the literature has yet to converge to a consensus among researchers about the most effective ways to increase the quality of primary education, as recent systematic reviews demonstrate. In particular, our findings help in understanding both the effect sizes of interventions shown to raise students’ test scores, and the reasons why some well-intentioned policy experiments have not significantly impacted learning outcomes. At the core is the interdependence between teacher effort, ability, and skills in generating high quality education."

Fourth, I don’t disagree that there are stark differences in the ‘ecosystems’ around global education and global health, though perhaps these differences – as the UK government’s Multilateral Aid Review ratings seem to suggest – are more about quality than quantity. But the word ‘ecosystem’ suggests a degree of harmony and coherence that masks very real strategic tensions and debates within global health – in particular, between the ‘vertical’ funders like GAVI and the Global Fund and more traditional actors focusing on ‘horizontal’ system-strengthening work. To crudely caricature a big and (very) long-running debate, the narrow focus of the vertical funders and their prioritisation of a few specific capabilities over long-term institution building seem to have reduced the variability of their programming, allowing them to more consistently deliver on their objectives. Critics counter that this has come at a big cost: in the long term, making countries dependent on constant injections of outside cash and capability rather than building permanent, high-performing institutions that can raise health outcomes for all; in the short term, neglecting diseases that fall outside their focus area, even if they have a dramatic impact on public health. To take an extreme example, there is no Global Fund for Preventing Road Traffic Accidents, even though in many countries they now kill more people than malaria.

What’s interesting is that the education ecosystem has put its eggs fully in the “systems-strengthening” basket. The model of the Global Partnership for Education, for example, is essentially that developing countries develop an approved sector plan and GPE funds it, in line with development effectiveness principles like 'country ownership'. Of course, international donors also do their own programming, some of it working more directly through government systems and some of it less. But none of this is on anything like the scale of the global health verticals. Can you imagine, for instance, a Global Fund for Early Years Education that took the same mass-scale, vertically integrated approach to fixing the problem of poor quality/non-existent early years’ provision that GAVI has for vaccines? In other words, differences in ecosystems are also differences about strategy, and the question is whether some of the strategies pursued by the global health community are actually available to the global education community.

Finally, there’s the money question. Is under-investment in education a barrier? 100% yes. Is it the barrier? The evidence says probably not: as the WDR notes, "the relationship between spending and learning outcomes is often weak."

More to the point, no one in this debate ever seems to ask why the money isn't flowing in global education the way it has in global health. To invest in raising the quality of education, whether as a domestic policy-maker or an international donor, you presumably have to believe three things: that raising the quality of education is important; that there is a reasonable chance your investment will yield the promised benefits; and that the benefits of the investment outweigh its costs (financial, political or otherwise).

To scan the official reports and Twitter feeds of some of the most influential players in the education ‘ecosystem’, you’d think that the first of these three was all that mattered. They are littered with factoids extolling the benefits of education: for the earnings of the educated, for GDP, for health, for women's reproductive rights, for the environment, and so on. The implication of this messaging is that policy-makers and donors don’t yet understand the returns to education. Really? An alternative analysis would be that policy-makers do understand the returns to education, but believe (rightly or wrongly) that the true, risk-adjusted returns are much lower. Perhaps they aren’t confident that any of the policies or programmes at their disposal will actually yield the results they want given the messy reality of implementation. Given the litany of seemingly promising policy interventions found not to work, or found to work but not to scale, that would not be an altogether unreasonable conclusion. Or perhaps they eschew reforms that genuinely would make a difference because they carry an unacceptable political cost. Either way, messages that don't address these types of concerns are likely to fall on deaf ears.

Which brings me to the 'p' word. What’s missing from this article is a proper discussion of politics. As the team at RISE have long argued, and the recent WDR has amplified, the underlying cause of the global learning crisis is that the way education systems are organized, funded and incentivized is not necessarily designed to lead to sustained improvements in the quality of learning (as opposed to, say, the quantity of schooling) - and it's often politics that keeps it that way.

To me, the most interesting question for global education folk to explore with their colleagues in global health is therefore the politics of reform: the reform coalitions and institutional strategies that have enabled such impressive progress in some areas; the reasons why these strategies proved successful in overcoming particular institutional challenges or binding in vested interests or circumventing potential veto players; but also the limits of these strategies to deliver change in other areas where a different political calculus held. 

I’m all in favour of seeing what can be learned from other sectors, but as a wise man once said: if you miss the politics, you miss the point.

 

* See Romero, Sandefur and Sandholz (2017), p 24: "Students in treatment schools score higher at baseline than those in control schools by .076σ in math (p-value=.077) and .091σ in English (p-value=.049). There is some evidence that this imbalance is not simply due to “chance bias” in randomization, but rather a treatment effect that materialized in the weeks between the beginning of the school year and the baseline survey."

10 Reflections on the WDR

Last week, the World Bank published its latest World Development Report (WDR), the first dedicated exclusively to the topic of education. Learning to Realize Education's Promise may not be the punchiest title I've ever heard, but it's a really important piece of work.


"Schooling is not the same as learning.

Education is an imprecise word, and so it must be clearly defined. Schooling is the time a student spends in classrooms, whereas learning is the outcome—what the student takes away from schooling. This distinction is crucial: around the world, many students learn little."

Here are 10 high level reflections:

1. Not new, but definitive. The very first sentence in the report is "Schooling is not the same as learning". This is not a new claim. Lant Pritchett literally wrote the book on this a couple of years ago; Pauline Rose took to Twitter to express her exasperation that anyone could ever have thought otherwise. But repetition is an under-appreciated tool in good communications, and often "about the time you get tired of saying it, they are just starting to hear it." In short, I don't expect the value of this report to be in its novelty but in its definitiveness. There's not much in here that hasn't been covered in some RISE paper or other. But the evidence is so exhaustive it should make the learning crisis the point of departure for every conversation about global education.

2. o-LAY. That said, the report does include a couple of neat concepts I hadn't come across before. One that stood out was the Learning Adjusted Years of Schooling (LAYS). Borrowing (presumably) from the concept of Disability Adjusted Life Years (DALYs) in health, which modify the simple measurement of life expectancy in years by how healthy those years actually are, LAYS account for the fact that the productivity of a year of schooling when it comes to actual learning varies wildly between countries and time periods. Since "years of schooling" still, regrettably, remains such a common metric, this feels like a helpful contribution.
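For the numerically inclined, the intuition behind LAYS can be sketched in a few lines. This is a toy illustration rather than the World Bank's formal methodology: it simply discounts raw years of schooling by a country's average test score relative to a hypothetical high-performance benchmark, with all numbers invented.

```python
# Toy sketch of Learning-Adjusted Years of Schooling (LAYS).
# Assumption: learning productivity is proxied by the ratio of a country's
# average test score to a high-performance benchmark score. All numbers
# below are invented for illustration.

def lays(expected_years_of_schooling: float,
         avg_test_score: float,
         benchmark_score: float) -> float:
    """Discount raw years of schooling by relative learning per year."""
    return expected_years_of_schooling * (avg_test_score / benchmark_score)

# Hypothetical country: 10 expected years of schooling, but students score
# 350 against a benchmark of 625 - so those 10 years 'count' for only 5.6.
print(round(lays(10, 350, 625), 1))  # 5.6
```

Two countries with identical years of schooling can thus end up with very different LAYS, which is exactly the point of the adjustment.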

3. Improving education isn't easy but it is simple. I've written elsewhere that the barriers to improving education are not, in themselves, that complicated. The report does a nice job of providing a framework laying out the 'proximate causes' of poor quality learning: unprepared learners, unskilled or unmotivated teachers, weak school governance and management, and misdirected inputs and resources. And, the report notes, we actually know a lot more about addressing some of these issues than we used to thanks to an explosion in the number of high quality impact evaluations. The problem - the not-so-easy part - is that these proximate causes persist because of deeper, more political challenges.

4. If you miss the politics, you miss the point. This will not be news to my former colleagues, but the report helpfully underlines the importance of understanding the political factors that allow the learning crisis to go unaddressed. Some of these are self-evidently malign things like corrupt practices diverting resources from where they are needed, or patronage allowing too many of the wrong people to end up in vital jobs. But the report also points to some of the less obvious things, like the fact that learning is just harder to 'see' than student enrolment, teacher hiring or other potential areas of focus for education policy-makers (echoes here of James Scott's Seeing Like A State).

5. What gets measured, gets managed. Or does it? This need to make the learning crisis more visible motivates the authors to call for a big push on assessing learning. Ideally, this would involve a global learning metric, an idea that seems obviously sensible and relatively straightforward to me, but which is universally regarded by those more knowledgeable than I am to be a diplomatic conundrum more complex to resolve than the Schleswig-Holstein Question. Absent a global metric, the authors suggest, more investment in national learning assessments would still be better than nothing.

I'm conflicted on this point. On the one hand, investing in better data seems an absolute no brainer; the first step to recovery is admitting you have a problem and all that. On the other hand, it's clear that even in countries where citizen-led learning assessments like ASER and UWEZO have taken root or where national Ministries have signed up to be part of regional exercises like PASEC, better data has not necessarily been the burning platform for which some might have hoped. It's hard not to conclude that you can equip Ministers with better data on the problem and more robust evidence on the types of policies that will and won't make a difference, but in the end whether anything changes comes down to finding reform-minded leaders with political courage, like Liberia's George K. Werner.

6. The power of And. I was pleased to see that the report tackled head-on the suggestion that more rigorous assessment of student learning necessarily involves a narrowing of focus to the exclusion of other things we might care about fostering in our young people, from character traits like resilience to the fuzzy but nevertheless crucial "21st century skills". To the extent that this argument has any merit in an OECD context (and I'm not sure that it does), it seems absurd given the scale of the quality crisis in the developing world and how intimately linked better teaching of the basics and improvements in some of these other areas are going to be. As the authors note: 

"Conditions that allow children to spend two or three years in school without learning to read a single word or to reach the end of primary school without learning two-digit subtraction are not conducive to reaching the higher goals of education."

The bottom line is that good schools can do both (one reason I'm glad our independent evaluators at Oxford are looking at both the cognitive and non-cognitive development of our students).

7. Forget about the price tag. The Report has been criticised from some quarters for saying too little about money, particularly when the ink has barely dried on Gordon Brown's Education Commission report calling for billions more each year to be funnelled into global education. Of course, the criticism of that report was precisely the mirror image of this: it provided highly detailed costings based on a series of assumptions about what will deliver quality education that had little basis in rigorous evidence. More generally, one of the problems with the discussion about resourcing is that more money is almost certainly both an input to and an output of more effective education reforms: if it were clearer that investments were delivering results, more money would flow into them.

8. Two sides of the same coin? Another criticism of the report is that it has relatively little to say about access, even though millions remain out of school and in some countries school enrolments appear to be dropping not rising. That said, the debate on access sometimes comes close to slipping into a kind of sequentialism - let's fix access, then worry about quality - and the report helpfully points out that they have to be addressed together (even if by different means). If students are not learning or are being asked to repeat grades, their (and their family's) motivation to stay in school falls.

9. Who benefits? In its discussion of fairness and equity, the report mostly focuses on within-country inequality, and the large gaps in access and achievement facing disadvantaged groups. Addressing these is clearly important, but I've noted elsewhere my concern that the lack of proper data on where most learners in developing countries sit in relation to a global 'cognitive poverty line' (analogous to the $1.90 a day global income poverty line) makes it easy to under-value the importance of improving outcomes for the millions of children who may be among the educationally better off in their own countries, but in global terms remain among the most disadvantaged in the world. One other comment on equity: the report usefully points out that fairness is not just about rich and poor students but about good and bad schools. An arresting statistic cited in the report is that in one study in Pakistan, the achievement gap on an English test between students in good and bad schools was 24 times bigger than that between richer and poorer students, even after controlling for student characteristics.

10. Private: no panacea? The report strikes a surprisingly cautious note on the potential contribution of private schools. Surprising in part because I had been reliably informed that the World Bank was secretly a vast conspiracy to push the privatization agenda of its paymasters in Big Edu(TM), but more because this seems to be one area where the Report seems to depart from what the evidence actually says. For example, the Report claims "there is no consistent evidence that private schools deliver better learning outcomes than public schools" and that such evidence as exists "may conflate the effects of private schools themselves with the effects of the type of students who enroll in private schools." Far be it from me to question the authors' interpretation of the literature (he says, preparing to do precisely that) but on the first claim it would seem that there is at least moderate evidence that private schools out-perform public schools, and that this performance advantage is mediated but not wholly eliminated when you control for observable student characteristics. But anyway, this minor quibble just goes to show that those of us who believe there is a complementary role for non-state school operators need to do a better job of building our evidence base. And the central claim of this part of the report - that "overseeing private schools may be no easier than providing quality schooling" - speaks to the fact that whether as a partner in initiatives like Partnership Schools for Liberia, or just as a regulator of private schools, we are talking about government as an enabling state, not a smaller state.

Positive early gains for Partnership Schools and Rising

‘Gold standard’ evaluation finds positive early gains for Partnership Schools and for Rising.

The evaluation team behind the Randomised Controlled Trial (RCT) of Partnership Schools for Liberia (PSL - okay, that’s enough three letter abbreviations) has just released their midline report. The report covers just the first year of PSL (September 2016-July 2017). A final, endline report covering the full three years of the PSL pilot is due in 2019.

While much anticipated, this is only a midline report with preliminary results from one year of a three year programme. The report therefore strikes a cautious tone and the evaluation team are careful to caveat their results. 

Nevertheless, there are important and encouraging early messages for PSL as a whole and for Rising in particular. Put simply, the PSL programme is delivering significant learning gains, and Rising seems to be delivering among the largest gains of any school operator.

For PSL as a whole, the headline result is that PSL schools improved learning outcomes by 60% more than control schools, or, put differently, by the equivalent of 0.6 extra years of schooling.

These gains seem to be driven by better management of schools by PSL operators, with longer school days, closer supervision of staff and more on-task teaching resulting in pupils in PSL schools getting about twice as much instructional time as in control schools. PSL schools also benefited from having more money and having better quality teachers, particularly new graduates from the Rural Teacher Training Institutes. But the report is clear that, based on their data and the wider literature, it is the interaction of these additional resources and better management that makes the difference; more resources alone is not enough. (Anecdotally, I would add that our ability to attract these new teachers was at least in part because they had more confidence in how they would be managed, which illustrates the point that new resources and different management are not easily separated.)  

Rising Results

The report also looks at how performance varies across the 8 operators that are part of PSL. Even more than the overall findings, the discussion of operator performance is limited by the small samples of students the evaluation team drew from each school. For operators (like Rising) operating only a small number of schools, this means there is considerable uncertainty around the evaluators’ estimates. That said, the evaluation team do their best to offer some insights.

Their core estimate is that, compared to its control schools, Rising improved learning outcomes by 0.41 standard deviations or around 1.3 additional years of schooling. This is the highest of any of the operators, though it is important to note the overlapping confidence intervals between several of the higher performing providers.


However, this core estimate is what’s known as an “intent-to-treat” or ITT estimate. It is based on the 5 schools that were originally randomly assigned to Rising. But we only actually ended up working in 4 of those (* see below). The ITT estimate is therefore a composite of results from 4 schools that we operated and 1 school that we never set foot in. A better estimate of our true impact is arguably offered by looking at our impact just on those students in schools we actually ended up working in. This “treatment on the treated” or TOT estimate is considerably higher, with a treatment effect of 0.57 standard deviations or 1.8 extra years of schooling. This, again, is the highest of any operator, and by a considerably larger margin, though again the confidence intervals around the estimate are large.


Whether the ITT or TOT estimate is the more useful depends, in my view, on the policy question you are trying to answer. At the level of the programme as a whole, where the policy question is essentially “what will the overall effect of a programme like this be?”, the ITT estimate seems the more useful because it is fair to assume that some level of ‘non-compliance’ will occur and the programme won’t get implemented in all schools. But at the inter-operator level, where the salient policy question is “given that this is going to be a PSL school, what will be the impact of giving this school to operator X rather than operator Y?”, the TOT estimate seems more informative because it is based solely on results in schools where those operators were actually working.
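The mechanics of moving from an ITT to a TOT estimate are worth a quick sketch. Under one-sided non-compliance like ours (one assigned school never treated, no control schools treated), the standard Wald/IV adjustment simply divides the ITT effect by the compliance rate. This is a back-of-envelope illustration, not the evaluators' actual estimation, which works from the underlying student-level data, so the number won't exactly match the report's 0.57:

```python
# Back-of-envelope Wald/IV adjustment: with one-sided non-compliance,
# TOT = ITT / compliance rate. Illustrative only - the evaluators estimate
# TOT from student-level data, so this won't exactly reproduce the
# 0.57 sigma figure in the report.

def tot_from_itt(itt_effect: float, compliance_rate: float) -> float:
    """Treatment-on-the-treated effect implied by an ITT estimate."""
    return itt_effect / compliance_rate

# 4 of the 5 randomly assigned schools were actually operated by Rising
print(round(tot_from_itt(0.41, 4 / 5), 2))  # 0.51
```

The intuition is simple: the untreated fifth school presumably contributed roughly zero effect, so the average across all five assigned schools understates the effect in the four schools actually run.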

A further complication in comparing across operators is that operators have different sample sizes, pulled from different populations of students across different geographical areas. It cannot be assumed that we are comparing like with like. To correct for this, the evaluators control for observable differences in school and student characteristics (e.g. by using proxies for their income status, geographic remoteness etc), but they also use a fancy statistical technique called 'Bayesian hierarchical modelling'. Essentially, this assumes that because we are part of the same programme in the same country, operator effects are likely to be correlated. It therefore dilutes the experimental estimate for Rising by making it a weighted average of Rising's actual performance and the average performance of all the operators. It turns out that adjusting for baseline characteristics doesn’t make too much difference (particularly for Rising, since our schools were more typical), but this Bayesian adjustment does. It drags Rising back towards the mean for all operators, with the amount we are dragged down larger because our sample size is smaller. We still end up with the first or second largest effect depending on which of the ITT or TOT estimate is used, but by design we are closer to the rest of the pack.
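To see how this Bayesian 'dragging towards the mean' works, here is a deliberately simplified sketch of the shrinkage at its core: each operator's estimate becomes a precision-weighted average of its own data and the pooled mean, so noisier (smaller-sample) estimates get pulled harder. All the numbers, including the between-operator spread `tau`, are invented; the evaluators' actual hierarchical model is considerably richer.

```python
# Simplified empirical-Bayes-style shrinkage behind hierarchical modelling.
# Each operator's estimate is replaced by a precision-weighted average of
# its own estimate and the grand mean across operators. All numbers are
# invented; 'tau' is an assumed between-operator standard deviation.

def shrink(estimate: float, se: float, grand_mean: float, tau: float) -> float:
    """Pull a noisy operator estimate toward the pooled mean."""
    w = tau**2 / (tau**2 + se**2)  # weight on the operator's own data
    return w * estimate + (1 - w) * grand_mean

# A large effect measured noisily (small sample) is dragged a long way...
print(round(shrink(0.57, se=0.20, grand_mean=0.18, tau=0.10), 2))  # 0.26
# ...while the same effect measured precisely barely moves.
print(round(shrink(0.57, se=0.05, grand_mean=0.18, tau=0.10), 2))  # 0.49
```

This is why a small operator with a large point estimate ends up, by design, closer to the pack than its raw experimental estimate would suggest.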

Some reflections on the results

So what do we make of these results?

First of all, we are strongly committed to the highest levels of rigour and transparency about our impact. We had thought that the study wouldn’t be able to say anything specific about Rising at all for technical reasons to do with the research design (for nerdier readers: it was originally designed to detect differences between PSL schools and non-PSL schools, and was under-powered to detect differences among operators within PSL). We're glad the evaluation team were able to find some ways to overcome those limitations.

Second, it is interesting and encouraging that the results largely confirm the strong progress we had been seeing in our internal data. Those data looked promising, but absent a control group to provide a robust counterfactual, it was impossible to know for sure that the progress we were seeing was directly attributable to us. As we said at the time and as the evaluation team note in an appendix to this report, our internal data were for internal management purposes and were never meant to have the same rigour as the RCT. But as it turns out, our internal data and the RCT data are pretty consistent. Our internal data suggested that students had made approximately 3 grades' worth of progress in one academic year; the TOT estimate in the RCT is that they had made approximately 2.8 grades’ worth of progress in one academic year. Needless to say, knowing that we can have a good amount of conviction in what our internal data are telling us is very important from a management point of view.

Third, while making direct comparisons between operators is tricky for the reasons noted above, on any reasonable reading of this evidence Rising emerges as one of the stronger operators, and this result validates the decision by the Ministry of Education to allocate 24 new schools to Rising in Year 2. In both absolute and relative terms, this was one of the larger school allocations and reflected the Ministry’s view that Rising was one of the highest performing PSL operators in Year 1. It is good - not just for us but for the principle of accountability underlying the PSL programme as a whole - that the RCT data confirm the MoE’s positive assessment of Rising’s performance.

Taking up the challenge

I also want to be very clear about the limitations of the data at this stage. It is not just that it’s very early to be saying anything definitive. It’s also that these data do not yet allow Rising, or really any operator, to fully address two of the big challenges that have been posed by critics of PSL.

The first challenge is around cost. As the evaluators point out, different operators spent different amounts of money in Year 1, and all spent more money than would be typically made available in a government school. In the end, judgments about the success of PSL or individual operators within it will need to include some assessment not just of impact but of value for money. PSL can only be fully scaled if it can be shown to be effective and affordable. Rising was one of those operators whose unit costs were relatively high in Year 1. That’s because a big part of our costs is the people and the systems in our central team and with just 5 schools in year 1, we had few economies of scale. These costs should fall precipitously once they start to be shared over a much larger number of schools and students. But that’s a testable hypothesis on which the Ministry can hold us to account. In Year 2, we need to prove to them that we can deliver the same or better results at a significantly lower cost per student.

The second challenge is around representativeness. One criticism that has been aired is that Year 1 schools were the low hanging fruit. As the evaluation makes clear, it is simply not true that Year 1 schools were somehow cushy, but it is true that Year 1 schools were generally in easier to serve, somewhat less disadvantaged communities than the median Liberian school. And that’s precisely why the Ministry of Education insisted that the schools we and other operators will be serving in Year 2 be disproportionately located in the South East of Liberia, where those concerns about unrepresentativeness do not apply. If we can continue to perform well in these more challenging contexts, it will go some way to answering the question of whether PSL can genuinely become part of the solution for the whole of Liberia.

In short, the RCT midline provides a welcome confirmation of what our own data were telling us about the positive impact we are having. Our task for the coming academic year is to show that we can sustain and deepen that impact, in more challenging contexts, and more cost effectively. A big task, but one that we are hugely excited and honoured to be taking on.

A little over a year ago, Education Minister George Werner showed a great deal of political courage not just in launching this programme but in insisting that it be the subject of a ‘gold standard’ experimental evaluation. One year on, these results show that his vision and conviction are beginning to pay dividends. This report is not the final word on PSL, but the next chapter promises to be even more exciting.

 

* Footnote: as the evaluators note in their report, the process of randomly assigning schools in summer 2016 was complex, made even more challenging by the huge number of moving pieces for both operators and the Government of Liberia as both endeavoured to meet incredibly tight timescales for opening schools on September 5th. Provisional school allocations changed several times; by August 14th, three weeks before school opening, we still did not know the identity of our fifth school and it was proving very difficult to find a pair of schools near enough to our other schools to be logistically viable. Faced with the choice of dragging the process out any longer and potentially imperilling operational performance or opting to run a fifth school that was not randomly assigned, we agreed with the Ministry on the latter course of action. 

Expanding our network in Liberia

Photo credit: Kyle Weaver


We're proud to announce that the Ministry of Education has invited Rising Academies to significantly expand its school network in Liberia. From September 2017, Rising Academies will be operating 29 government schools across 7 counties.

The move comes as part of an expansion of the Partnership Schools for Liberia (PSL) program. To learn more about PSL, click here. The Ministry’s decision to award Rising more schools in the second year of PSL followed an in-depth review and screening process, including unannounced spot checks of PSL schools. Rising Academies was one of three providers to be awarded the top “A” rating for its strong performance in Year 1.

We're really proud of the progress our schools have made this year. If you want to learn more about how we've been rigorously tracking this progress and using data to inform our approach, check out our interim progress report here.

Our press release on the announcement of the Ministry's Year 2 plans is available here.

Benjamin's Story

Last week at the Investing in Education for the Future conference in Monrovia, Liberia, I had the opportunity to speak about our work under the Partnership Schools for Liberia initiative.

Benjamin Clarke. Principal, Sumo Town Public School


I chose to focus on the story of Benjamin Clarke (pictured right), the inspiring Principal of our school in Sumo Town, to illustrate the impact that PSL is having.

Here was the key bit of the speech:

When Benjamin took over as Principal in 2014 he inherited a school with just 1 payroll teacher and a part-time volunteer who could barely read. Even on a good day he had to teach 4 grades himself, as well as do all the admin, and the bad days outweighed the good. If he was sick or called away to a meeting, the school just didn’t operate.

Today, thanks to Partnership Schools, Benjamin has a qualified teacher in every classroom. He has a Master Teacher trained to observe lessons and give his teachers real-time feedback. His staff receive daily lesson plans with a focus on phonics and numeracy, and are trained in simple techniques that help them deliver more engaging lessons. And instead of being monitored almost never, he sees our team every week, and takes pride in showing them what is happening under his leadership.

I asked Benjamin if I could share his story with you tonight because I think it’s a reminder that for Partnership Schools to succeed it doesn't just need to correct the weaknesses in the Liberian education system, it needs to build on its strengths.

Check out the video for the full speech (5 mins).

Benjamin himself came along to the final day of the conference to share his experience of Partnership Schools with the delegates directly.