Busan is in the air: GAVI gets in front

One of the good things about a big international meeting is that agencies become motivated to deliver on commitments made at earlier big international meetings. As aid-world gears up for the OECD aid effectiveness meeting in Busan in November, agencies are checking their compliance with recommendations from earlier meetings, which include improving accountability, defining measures and standards of performance, and monitoring and evaluating implementation (see here).

Such is the case for the GAVI Alliance, which has recently posted performance indicators on its beautifully redesigned website. There is a lot to admire in this effort.

First, only a few well-defined, mostly quantitative indicators are included, no more than three measures under each goal, and each comes with a baseline and a named data source.

Second, GAVI has made the measures easily understandable and public, enabling GAVI-watchers to keep track of the Alliance’s achievements.

Third, GAVI includes a health systems goal, but is measuring its achievement in terms of its impact on timely and full vaccination, the health system-modifiable outcomes GAVI seeks to affect. This is good because previous efforts to define health system strengthening outcomes have been a bit squishy.

But there are a couple of issues that jump out:

What happened to vaccine-preventable diseases? Although vaccines have proven efficacy, they are not necessarily effective unless delivered correctly (meaning the cold chain is maintained and there is no immunity interference). In the absence of any data on effective coverage, a crude way to look at this issue is to compare cases of vaccine-preventable diseases (VPD) with reported coverage. Why skip from the number of vaccinated kids to the under-5 mortality rate without stopping in between to understand what is going on with VPD? VPD incidence is the most proximate outcome of vaccination, it mediates vaccination's impact on under-5 mortality, and it is the outcome we would most like to know whether GAVI is achieving.
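To make that cross-check concrete, here is a minimal sketch in Python of the arithmetic involved; every number in it is a hypothetical placeholder of mine, not a GAVI or WHO figure. It simply compares the VPD case count you would expect if reported coverage were real with an observed surveillance count, and flags a large gap as a sign that effective coverage is lower than reported.

```python
# Illustrative sketch only: a crude cross-check of reported coverage against
# observed vaccine-preventable disease (VPD) cases. All numbers below are
# hypothetical placeholders, not GAVI or WHO figures.

def expected_cases(cohort_size, reported_coverage, attack_rate_unvaccinated,
                   vaccine_effectiveness):
    """Expected annual VPD cases in a birth cohort, assuming the reported
    coverage is real and the vaccine works as delivered."""
    protected = cohort_size * reported_coverage * vaccine_effectiveness
    susceptible = cohort_size - protected
    return susceptible * attack_rate_unvaccinated

# Hypothetical inputs
cohort = 1_000_000          # children in the target cohort
coverage = 0.85             # reported (administrative) coverage
attack_rate = 0.02          # annual attack rate among unprotected children
effectiveness = 0.90        # effectiveness when the cold chain etc. are intact

expected = expected_cases(cohort, coverage, attack_rate, effectiveness)
observed = 9_500            # hypothetical surveillance count

print(f"Expected cases if reported coverage is real: {expected:,.0f}")
print(f"Observed cases: {observed:,}")
if observed > 1.5 * expected:
    print("Observed cases far exceed expectation: effective coverage is "
          "probably lower than reported (a delivery or data problem).")
```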

How do the indicators help with learning? You might think that impact evaluation isn't necessary because vaccines are proven efficacious, ideally applied to the universe of children, and not something you implement experimentally. Yet given that effectiveness depends on delivery, it would be very helpful to know which delivery strategies actually work best. This would imply doing some impact evaluation of different delivery strategies and defining related indicators, perhaps using the cash-based windows, rather than just counting numbers of kids and hoping for enlightenment.

Why continue to rely on administrative data? GAVI continues to use the WHO/UNICEF coverage estimates to measure vaccination coverage. These estimates are primarily based on country-reported administrative data, sometimes adjusted using household survey data and vetted by national authorities. The exact nature of the adjustments and the vetting is not transparent (see here for a description). We know that administrative data are of poor quality, since administrative estimates of coverage diverge wildly from household survey-based estimates. Lim et al. have already illustrated these gaps, which are significant. If GAVI is going to spend $7.6 billion between now and 2016, it's worth conducting household-based rapid coverage surveys instead of relying on someone's best guess. I make a case for this investment here.
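For a sense of what a rapid coverage survey would take, here is a minimal sample-size sketch using the standard binomial formula with a design effect for cluster sampling; the expected coverage, margin of error, and design effect are my illustrative assumptions, not GAVI or WHO protocol values.

```python
# Illustrative sketch only: rough sample size for a household-based rapid
# coverage survey, using the binomial sample-size formula inflated by a
# design effect for cluster sampling. Inputs are my assumptions.

import math

def coverage_survey_sample_size(expected_coverage=0.8, margin_of_error=0.05,
                                design_effect=2.0, z=1.96):
    """Children needed to estimate coverage within +/- margin_of_error
    at 95% confidence, allowing for cluster-sampling inefficiency."""
    p = expected_coverage
    n_srs = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n_srs * design_effect)

print(coverage_survey_sample_size())  # ~492 children per survey area
```

Real EPI coverage surveys use more elaborate designs (the classic WHO approach samples 30 clusters of 7 children), but the order of magnitude is similar: a few hundred households per area, which is a modest investment next to the sums being disbursed.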

Can GAVI measure deaths averted? How is it that the annual number of deaths averted as a result of vaccination goes from around 390,000 in 2010 to 3.9 million in 2015? I'm guessing that this is the modeled effect of scaling up pneumo and rota, but it would be nice to have a link to the model. To measure this impact directly in real life, there would have to be an enormous improvement in the quality of death registration data. Is that likely to happen? And if we continue to rely so heavily on modeling, what does this indicator mean for accountability purposes? Maybe this is one of those indicators that is better used for ex ante impact analysis rather than year-to-year performance tracking.
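For what it's worth, here is a purely hypothetical sketch of the kind of static cohort arithmetic that typically sits behind deaths-averted estimates; none of the parameters come from GAVI's actual model, but it shows how a projected scale-up in coverage mechanically multiplies the modeled number.

```python
# Illustrative sketch only: static cohort arithmetic of the sort that often
# underlies "deaths averted" estimates. All parameters are hypothetical
# placeholders, not the actual model behind GAVI's indicator.

def deaths_averted(birth_cohort, coverage, deaths_per_1000_from_disease,
                   vaccine_efficacy):
    """Deaths averted in one birth cohort = children vaccinated x baseline
    disease-specific mortality risk x vaccine efficacy."""
    vaccinated = birth_cohort * coverage
    baseline_risk = deaths_per_1000_from_disease / 1000
    return vaccinated * baseline_risk * vaccine_efficacy

# Hypothetical scale-up of a new vaccine across GAVI-eligible countries
print(deaths_averted(60_000_000, 0.10, 8, 0.6))  # ~28,800 at 10% coverage
print(deaths_averted(60_000_000, 0.70, 8, 0.6))  # ~201,600 at 70% coverage
```

The point is that an indicator built this way tracks the model's inputs, projected coverage and assumed efficacy, rather than any independently measured change in mortality, which is exactly why it sits awkwardly in a year-to-year accountability framework.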

Bottom line: It looks good and is headed in the right direction, but the “results and evidence” tab needs to rely on more rigorously measured results to be useful for more than advocacy.
