International Association of Educators   |  ISSN: 2834-7919   |  e-ISSN: 1554-5210

Original article | International Journal of Progressive Education 2014, Vol. 10(2) 89-102

The Use of Outcome Mapping in the Educational Context

Anna Lewis

pp. 89-102  |  Manuscript Number: ijpe.2014.053

Published online: June 15, 2014


Abstract

Outcome Mapping is intended to measure the process by which change occurs; it shifts attention away from the products of a program and toward changes in the behaviors, relationships, actions, and activities of the people involved in the treatment program. This process-oriented methodology, most often used in designing and evaluating community development projects, uses graduated progress markers to determine whether an intervention is achieving the desired outcomes, and these markers form the basis for further monitoring and evaluation. This theoretical paper explores the use of Outcome Mapping as an alternative or supportive method of research design and evaluation in teaching and learning contexts. Outcome Mapping can provide educational researchers with the tools to think holistically and strategically about the processes and partners needed to achieve successful results. The paper discusses the relevance of this method and compares and contrasts it with the functionality, use, and outcome measures of current educational assessment methods.
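To make the idea of graduated progress markers concrete, the sketch below is a minimal, hypothetical illustration (not drawn from the paper) of how markers for one boundary partner might be recorded and summarized. The three tiers ("expect to see", "like to see", "love to see") are standard Outcome Mapping terminology; the partner name, marker texts, and rating scale are invented for illustration.

# Hypothetical sketch of graduated progress markers for one boundary partner.
# Tier names follow Outcome Mapping convention; everything else is illustrative.
from dataclasses import dataclass, field

@dataclass
class ProgressMarker:
    description: str
    tier: str          # "expect", "like", or "love"
    rating: int = 0    # assumed scale: 0 = not yet observed .. 3 = consistently observed

@dataclass
class BoundaryPartner:
    name: str
    outcome_challenge: str
    markers: list[ProgressMarker] = field(default_factory=list)

    def summary(self) -> dict[str, float]:
        """Average rating per tier, giving a rough picture of progress."""
        tiers: dict[str, list[int]] = {"expect": [], "like": [], "love": []}
        for m in self.markers:
            tiers[m.tier].append(m.rating)
        return {t: (sum(r) / len(r) if r else 0.0) for t, r in tiers.items()}

# Illustrative use in a classroom-intervention context (all values invented)
teachers = BoundaryPartner(
    name="Participating teachers",
    outcome_challenge="Teachers routinely use formative feedback in lessons",
    markers=[
        ProgressMarker("Attend professional-development sessions", "expect", 3),
        ProgressMarker("Trial formative-feedback techniques in class", "like", 2),
        ProgressMarker("Mentor colleagues in formative assessment", "love", 1),
    ],
)
print(teachers.summary())

Such a record, repeated over time, is one way the graduated markers described in the abstract could feed ongoing monitoring and evaluation.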

How to Cite this Article?

APA 6th edition
Lewis, A. (2014). The Use of Outcome Mapping in the Educational Context. International Journal of Progressive Education, 10(2), 89-102.

Harvard
Lewis, A. (2014). The Use of Outcome Mapping in the Educational Context. International Journal of Progressive Education, 10(2), pp. 89-102.

Chicago 16th edition
Lewis, Anna (2014). "The Use of Outcome Mapping in the Educational Context". International Journal of Progressive Education 10 (2): 89-102.
