Monitor, Report, Evaluate to Improve (MREI) - Policy, Program, Practices, Capacity Surveys
Externally driven and regularly administered policy, program, practices and capacity surveys are another means by which the reach, quantity, duration, features, accessibility and participation in specific interventions, multi-intervention programs and multi-component approaches can be monitored and reported. Policy and program surveys differ from the voluntary use of self-assessment tools, whereby schools, agencies or ministries decide to use a survey for their own improvement planning. They also differ from ad hoc global, regional or national updates and reports, which are done when funding is available.

Requiring all entities within an organization to complete the survey, random sampling, or efforts to ensure a sufficient number of responses can increase the validity of the survey. Using controls and comparison groups through commissioned independent research increases validity even further but is often beyond the resources of many organizations.

Policy and program surveys can report on the status, reach, access/client participation and key features of the intervention, program or approach. Other questions can report on capacities such as assigned coordinators, coordination committees, the use of specific implementation, maintenance and scaling-up (IMSS) practices, as well as organizational capacities and routines and the degree to which the policy or program has been integrated within the education or other participating systems. Examples of such surveys include applications to specific interventions such as school feeding, multi-intervention programs such as those on tobacco use, and multi-component approaches such as Child Friendly Schools or Health Promoting Schools.

There is an assumption that the extrinsic motivation of externally driven and administered policy and program surveys will lead to improvements in practice. Consequently, advocates, policy-makers and officials often seek to invert the motto "what matters is monitored" to "monitor to make it matter" in order to promote the policies and programs they are already implementing. There have been few studies describing or evaluating the impact of externally driven reports in motivating or supporting significant improvements in policy, programs or practices.
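As an illustration of what "a sufficient number of responses" can mean in practice, the sketch below applies the standard sample-size formula for estimating a proportion, n0 = z^2 * p(1 - p) / e^2, with a finite-population correction. This is a minimal sketch using conventional survey statistics, not a calculation taken from any particular ISHN survey; the function name, default margin of error and confidence level are illustrative assumptions.

```python
import math

def required_responses(population: int,
                       margin_of_error: float = 0.05,
                       confidence_z: float = 1.96,
                       expected_proportion: float = 0.5) -> int:
    """Estimate how many completed surveys are needed to report a
    proportion (e.g. the share of schools with an assigned
    coordinator) within the given margin of error.

    Uses the standard formula n0 = z^2 * p * (1 - p) / e^2 and then
    applies the finite-population correction
    n = n0 / (1 + (n0 - 1) / N). All defaults (95% confidence,
    +/-5% margin, p = 0.5 as the most conservative guess) are
    illustrative assumptions, not values from the source text.
    """
    p = expected_proportion
    n0 = (confidence_z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    n = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n)

# Example: a ministry surveying 2,000 schools would need roughly
# 323 completed surveys for a +/-5% margin at 95% confidence.
print(required_responses(2000))
```

Note that the formula assumes responses are drawn at random from the sampling frame, which is why random sampling matters as much as the raw count: a full census with low, self-selected response can be less valid than a smaller random sample.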
This summary was developed from an ISHN project on Monitoring and Reporting done in cooperation with the International Union for Health Promotion and Education, with funding provided by the USA Centers for Disease Control & Prevention. It was first posted in June 2012 and revised in May 2021 with support from a project done through Simon Fraser University and UNICEF. It is currently posted as a "revised edition" version. The following individuals or organizations have contributed to the development of this topic: Albert Lee, Christine Beyer, Nancy Hudson, Candace Currie, Vivian Barnekow and Doug McCall. We encourage readers to submit comments or suggested edits by posting a comment on the Mini-blog & Discussion Page for this section or posting a comment below.
Examples of externally driven policy and program surveys at the global, regional and national levels include:
Due to the length of Handbook Sections (similar to a book chapter) prepared for this web site and knowledge exchange program, we post these documents as separate documents. Click on this web link to access the draft or completed version on this topic, and come back to this page to post any comments or suggestions.
Bibliography/Toolbox on this Topic
Key research, reports and resources on this topic are highlighted below. Many of the topics on this web site also have extensive bibliographies/toolboxes (BTs) published as separate documents. Click on this web link to access the full version of our Bibliography/Toolbox on this topic. These lists follow the outline for these collections that we have developed over several years of curating these materials.
The following additional resources are posted on this web site or published by other credible sources. Please send any suggested additions to i[email protected]
For updates and reader comments on this section, go to our Mini-Blog on Monitor-Report-Evaluate-Improve (MREI)
The summaries completed or drafted in this section are listed below:
- Overview