Can Local Staff Reliably Assess Their Own Programs? A Confirmatory Test-Retest Study of Lot Quality Assurance Sampling Data Collectors in Uganda

Journal Article
  • Colin A. Beckworth
  • Robert Anguyo
  • Francis Cranmer Kyakulaga
  • Stephen K. Lwanga
  • Joseph J. Valadez
BMC Health Services Research
Aug. 2016; 16: 396. DOI: 10.1186/s12913-016-1655-4.



Data collection techniques that routinely provide health system information at the local level are in demand. Lot Quality Assurance Sampling (LQAS) is intended for use by local health teams to collect data at the district and sub-district levels. Our question is whether local health staff produce biased results, given that they are responsible for implementing the programs they also assess.


This test-retest study replicates, on a larger scale, an earlier LQAS reliability assessment in Uganda. We conducted an LQAS survey in two districts using 15 local health staff as data collectors. A week later, the data collectors swapped districts, acting as disinterested non-local data collectors, and repeated the LQAS survey with the same respondents. We analysed the two resulting data sets for agreement using Cohen's Kappa.
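Cohen's Kappa compares the observed agreement between the two surveys with the agreement expected by chance from each survey's marginal response distribution. A minimal sketch of the computation, using hypothetical yes/no responses rather than data from the study:

```python
from collections import Counter

def cohens_kappa(ratings1, ratings2):
    """Cohen's Kappa for two paired ratings of the same respondents."""
    assert len(ratings1) == len(ratings2)
    n = len(ratings1)
    # Observed agreement: proportion of identical paired ratings
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement: product of marginal proportions, summed over categories
    c1, c2 = Counter(ratings1), Counter(ratings2)
    categories = set(ratings1) | set(ratings2)
    p_e = sum((c1[c] / n) * (c2[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical test and retest responses (1 = yes, 0 = no)
test   = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
retest = [1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(test, retest), 2))  # → 0.52
```

On the commonly used Landis and Koch scale, values of 0.41-0.60 are read as moderate agreement and 0.61-0.80 as substantial agreement.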


The average Kappa score was k = 0.43 (SD = 0.16) for the knowledge indicators and k = 0.63 (SD = 0.17) for the practice indicators, showing moderate agreement for knowledge indicators and substantial agreement for practice indicators. Analyses indicate that respondents were more knowledgeable on retest; no evidence of bias was found for the practice indicators.


The findings of this study are remarkably similar to those of the first reliability study. There is no evidence that using local health staff to collect LQAS data biases data collection in an LQAS study. The bias observed in the knowledge indicators was most likely due to a 'practice effect', whereby respondents increased their knowledge as a result of completing the first survey; no corresponding effect was seen in the practice indicators.