Collective impact is a hot topic right now. Try googling it – you’ll get about 424,000 results in Google and about 12,000 results in Google Scholar.
There is a plethora of webinars and conferences dedicated to understanding and implementing collective impact. While understanding how to do collective impact work is necessary when embarking on that process, it is also critical to understand how to know when it is or is not working.
Developmental evaluation is often paired with collective impact models, and that methodology fits well (when implemented appropriately) for initiatives that operate in a continual state of emergence.
Other evaluation styles can also work with collective impact initiatives, as long as the evaluators are responsive to the changing needs of the work and share meaningful feedback along the way that is used. Used – that is the critical word here.
This entire Carpe Diem blog is dedicated to improving the use of evaluation findings in meaningful ways, and it is very easy to forget to use evaluation results when operating in the state of chaos that is collective impact work.
In my experience evaluating initiatives that use a collective impact approach, the use of evaluation findings typically falls into one of two categories:
1) react to evaluation results immediately and make changes or
2) consider evaluation results, set them aside, return to them 6-18 months later, and only then use them.
In other evaluation projects, I have seen a middle timeframe: findings are reviewed immediately but acted upon a few months later, after there has been time to thoughtfully prepare a plan for their use. I have not seen that as the primary approach in any of the collective impact initiatives we have evaluated, yet I would propose to other collective impact initiatives that this middle ground may be the better method to employ.