Abstract

The question of why a programme should be evaluated is seldom discussed in the literature. The present paper argues that the answer to this question is essential for choosing an appropriate evaluation design. The discussion is centred on summative evaluations of large-scale programme effectiveness, drawing upon examples from the fields of health and nutrition, although the findings may be applicable to other subject areas. The main objective of an evaluation is to influence decisions. How complex and precise the evaluation must be depends on who the decision maker is and on what types of decisions will be taken as a consequence of the findings. Different decision makers demand not only different types of information but also vary in how informative and precise they require the findings to be. Both complex and simple evaluations, however, should be equally rigorous in relating the design to the decisions.

Based on the types of decisions that may be taken, a framework is proposed for selecting appropriate evaluation designs. Its first axis concerns the indicators of interest: whether these refer to provision or utilization of services, or to coverage or impact measures. The second axis refers to the type of inference to be made: whether this is a statement of adequacy, plausibility or probability. Beyond this framework, other factors affect the choice of an evaluation design, including the efficacy of the intervention, the field of knowledge, timing and costs. Regarding costs, decision makers should be made aware that evaluation costs increase rapidly with complexity, so that often a compromise must be reached. Examples are given of how to use the two classification axes, together with these additional factors, to help decision makers and evaluators translate the need for evaluation (the why) into the appropriate design (the how).
International Journal of Epidemiology – Oxford University Press
Published: Feb 1, 1999