- Alkin, M., Daillak, R., & White, P. (1979). Using evaluations: Does evaluation make a difference? Beverly Hills, CA: Sage.
- Ambrose, K., Kelpin, K., & Smutylo, T. (2012, May 8). Oxfam Engendering Change Program mid-term learning review: Final report (Southern Africa, Horn & East Africa, and Americas workshops). Oxfam Canada.
- Barnett, W. S. (1995, Winter). Long-term effects of early childhood programs on cognitive and school outcomes. The Future of Children, 5(3), 25–50.
- Carden, F., Earl, S., & Smutylo, T. (2009). Outcome mapping: Building learning and reflection into development programs. Ottawa: International Development Research Centre (IDRC).
- Coffman, J. (2003-2004, Winter). Michael Scriven on the differences between evaluation and social science research. The Evaluation Exchange, 9(4).
- Cousins, J. B., & Whitmore, E. (1998). Framing participatory evaluation. New Directions for Evaluation, 80, 5–23.
- Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches (3rd ed.). Thousand Oaks, CA: Sage.
- Currie, J., & Thomas, D. (1995). Does Head Start make a difference? American Economic Review, 85(3), 341.
- Fournier, D. (Ed.). (1995). Reasoning in evaluation: Inferential links and leaps (New Directions for Evaluation, no. 48). San Francisco, CA: Jossey-Bass.
- Greene, J. C. (2005a). A value-engaged approach for evaluating the Bunche-Da Vinci Learning Academy. New Directions for Evaluation, 106, 27–45.
- Greene, J. C. (2005b). Evaluators as stewards of the public good. In S. Hood, R. K. Hopson, & H. T. Frierson (Eds.), The role of culture and cultural context: A mandate for inclusion, truth, and understanding in evaluation theory and practice (pp. 7–20). Greenwich, CT: Information Age Publishing.
- Hansen, M., Alkin, M. C., & Wallace, T. L. (2013). Depicting the logic of three evaluation theories. Evaluation and Program Planning, 38, 34–43.
- Harris, J., & Henderson, A. (2012, September-October). Coherence and responsiveness. Interactions, 19(5), 67–71.
- Hedges, L. V., & O’Muircheartaigh, C. A. (2010). Improving generalization from designed experiments. Working paper, Northwestern University.
- Hedges, L. V. (2012). Sample design for group randomized trials. Presentation for the 2012 IES/NCER Summer Research Training Institute at Northwestern University.
- IDRC. (2005b). Facilitation manual and facilitator summary sheets. http://www.idrc.ca/en/ev-62236-201-1-DO_TOPIC.html
- Kibel, B. M. (1999). Success stories as hard data: An introduction to results mapping. Springer.
- Liu, X. (2010). Using and developing measurement instruments in science education: A Rasch modeling approach. Information Age Publishing.
- Luskin, R., & Ho, T. (2013). Comparing the intended consequences of three theories of evaluation. Evaluation and Program Planning, 38, 61–66.
- Mark, M. M., & Henry, G. T. (2004). The mechanisms and outcomes of evaluation influence. Evaluation, 10, 35–57.
- Mark, M. M., Henry, G. T., & Julnes, G. (1999). Toward an integrative framework for evaluation practice. American Journal of Evaluation, 20, 177–198.
- Mathison, S. (2004a). An anarchist epistemology in evaluation. Paper presented at the annual meeting of the American Evaluation Association, Atlanta.
- Mathison, S. (2008). What is the difference between evaluation and research, and why do we care? In N. L. Smith & P. R. Brandon (Eds.), Fundamental issues in evaluation. New York, NY: The Guilford Press.
- Mark, M. M., Henry, G. T., & Julnes, G. (2011). Evaluation: An integrated framework for understanding, guiding, and improving policies and programs.
- Miller, R. L., & Campbell, R. (2006). Taking stock of empowerment evaluation: An empirical review. American Journal of Evaluation, 27, 296–319.
- Earl, S., & Carden, F. (2002). Learning from complexity: The International Development Research Centre's experience with Outcome Mapping. Development in Practice, 12(3-4), 518–524.
- Patton, M. Q. (1997). Utilization-focused evaluation: The new century text. Thousand Oaks, CA: Sage.
- Patton, M. Q., Grimes, P. S., Guthrie, K. M., Brennan, N. J., French, B. D., & Blythe, D. A. (1977). In search of impact: An analysis of the utilization of federal health evaluation research. In C. H. Weiss (Ed.), Using social research in public policy making. Lexington, MA: Lexington Books.
- Priest, S. (2001). A program evaluation primer. Journal of Experiential Education, 24(1), 34–40.
- Rogers, P. (2012). When to use outcome mapping. http://www.internationalpeaceandconflict.org/profiles/blog/show?id=780588%3ABlogPost%3A728981&xgs=1&xg_source=msg_share_post Retrieved May 1, 2013. (Rogers is Professor of Public Sector Evaluation at the Royal Melbourne Institute of Technology University.)
- Scriven, M. (1998, January 16). Research vs. evaluation. Message posted to the EVALTALK listserv: bama.ua.edu/cgi-bin/wa?A2=ind9801C&L=evaltalk&P=R2131&I=1&X=2F11357E5870213C59&Y
- Scriven, M. (1999). The nature of evaluation: Part I. Relation to psychology. Practical Assessment, Research and Evaluation, 6(11).
- Shaw, I., Greene, J. C., & Mark, M. M. (Eds.). (2006). The SAGE handbook of evaluation. London: SAGE Publications.
- Smutylo, T. (2005). Outcome mapping: A method for tracking behavioral changes in development programs (ILAC Brief 7). http://www.outcomemapping.ca/resource/resource.php?id=182
- Stufflebeam, D. (1999). Metaevaluation checklist. Retrieved December 15, 2005, from http://www.wmich.edu/evalctr/archive_checklists/eval_model_metaeval.pdf
- Trochim, W. (1998, February 2). Research vs. evaluation. Message posted to the EVALTALK listserv: bama.ua.edu/cgi-bin/wa?A2=ind9802A&L=evaltalk&P=R503&l=1&x=089CE94613356B8693&Y
- U.S. Department of Education (2002). What Works Clearinghouse. http://ies.ed.gov/ncee/wwc/
- Weiss, C. H., & Bucuvalas, M. J. (1980). Social science research and decision making. New York, NY: Columbia University Press.