At the Accelerating the Accelerators session at SOCAP15, participants took a deep dive into the current and future state of evaluation for Impact Accelerator and Incubator programs. The conversation focused on the most effective methods of measuring relationships within the community, the metrics used by external evaluators, and other metrics of interest.


For this exercise we used the “3 Horizons” facilitation framework, which helps people think through to a preferred future state of a system – in this case, what program evaluation would ideally look like by the year 2020.


We started by looking at how much time program evaluation demands of the program staff performing it, and how much value the process delivers.

1st Horizon – Present Day

The process today is burdensome, frustrating, and fails to produce aggregated information that enables program managers to make strategic decisions about the future of their programs. Program evaluation is a significant time sink for most program staff. They currently focus on self-evaluation using data collected from their cohorts and alumni, and must also fit in requests from third-party evaluators. These two channels for evaluation are not standardized, and program staff are frequently asked for data that require additional time and energy to collect.


In 2015, a wide variety of external reports were produced to shed light on the Impact Accelerator system and the role of accelerators in building strong economies:

  1. Global Social Entrepreneurship Network – From Seed To Impact: Building the Foundations For a High Impact Social Entrepreneur Ecosystem
  2. UBI Global – Top Social Incubators in the US Benchmark Report
  3. Rockefeller Foundation – Accelerating Impact: Exploring Best Practices, Challenges, and Innovations in Impact Enterprise Acceleration
  4. Unitus & Capria – 2015 Global Best Practices Report on Incubation and Acceleration
  5. Brookings Institution – Accelerating Growth: Startup Accelerator Programs in the United States


Each of these reports:

  • Required time from program managers to answer their surveys
  • Included a disparate set of players in the system without effectively representing the whole industry
  • Used a variety of disparate metrics in their evaluation process, requiring program managers to uncover new data, aggregate data in new ways, or otherwise look at their program from a new angle.


The value of these reports was inconsistent. Frequently, the intended audience seemed to be funders or other players in the ecosystem rather than the programs themselves (the Rockefeller Foundation and GSEN reports being notable exceptions).


The other source of inconsistency was the quality of the collected data (i.e., the number of errors it contained), as well as the quality of the analysis conducted to transform that data into meaningful information.

2nd Horizon – The Future in 2020

The community of accelerators gathered for the Accelerating the Accelerators session was by no means homogeneous: participants represented incubators and accelerators from 17 different countries, spanned a wide range of sectors, and supported entrepreneurs at a variety of stages. Even with these differences, a shared vision for the future emerged – though it was acknowledged that it will be very difficult to get there.


Most acknowledged that some level of standardization of collected metrics would be an ideal state – especially normalization around data that social entrepreneurs already have to collect and report to their funders. This desire for alignment with funders emerged time and again, showing the depth of program managers' understanding of how important funders are to individual social entrepreneurs, to programs, and to the health of the system as a whole.


Individual groups came up with different strategies to tackle the issue of standardization – from a Consortium of Evaluators, in which all evaluators would have access to a wide pool of raw data but be able to evaluate it using different frameworks, to a Common Platform, in which the application process would seamlessly connect to an evaluation system, making it as easy as possible for entrepreneurs, program managers, and funders to keep data up to date.
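
To make the Consortium of Evaluators idea concrete, here is a minimal sketch in Python of how a single shared pool of venture records could be scored by multiple evaluation frameworks without programs re-collecting data. The record fields, framework names, and weights are all hypothetical illustrations for this post, not an agreed industry schema.

```python
from dataclasses import dataclass
from typing import Callable

# A hypothetical standardized record that every program reports once.
# Field names are illustrative, not an agreed industry standard.
@dataclass
class VentureRecord:
    venture_id: str
    revenue_usd: float
    jobs_created: int
    beneficiaries_reached: int
    follow_on_funding_usd: float

# Each evaluator plugs in its own framework as a scoring function
# over the same shared pool of raw data.
EvalFramework = Callable[[VentureRecord], float]

def growth_framework(r: VentureRecord) -> float:
    # Hypothetical framework weighting economic outcomes.
    return 0.7 * (r.follow_on_funding_usd / 1e6) + 0.3 * (r.revenue_usd / 1e6)

def impact_framework(r: VentureRecord) -> float:
    # Hypothetical framework weighting social outcomes.
    return r.beneficiaries_reached / 1000 + 2 * r.jobs_created

def evaluate(pool: list[VentureRecord], framework: EvalFramework) -> dict[str, float]:
    """Score every venture in the shared pool with one evaluator's framework."""
    return {r.venture_id: framework(r) for r in pool}

pool = [
    VentureRecord("sv-001", revenue_usd=250_000, jobs_created=12,
                  beneficiaries_reached=8_000, follow_on_funding_usd=500_000),
]
print(evaluate(pool, growth_framework))   # {'sv-001': 0.425}
print(evaluate(pool, impact_framework))   # {'sv-001': 32.0}
```

The design point is that the raw data is collected once, while each evaluator's methodology is just a pluggable function over the shared pool – which is what would let a consortium apply different frameworks without sending programs back to gather new data.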


The ideal future state would require little to no additional time, would include the perspectives of social entrepreneurs, programs, and funders, would rely on a standardized set of metrics, and would provide value to each of the stakeholders.


3rd Horizon – How do we get from here to there?

The groups were then asked to look at what would be required in 2019 to get us to this vision in 2020, and what would be required in 2018, 2017, and 2016 to ensure a less painful and more effective evaluation system.  


2016 – Program managers called out the importance of learning from the traditional business world, not just in how accelerator programs are evaluated, but also in the metrics required of entrepreneurs. The current system is dysfunctional when it comes to impact measurement and what impact investors require of entrepreneurs, creating pain points for social entrepreneurs that entrepreneurs in other sectors do not experience. Impact investors use a wide range of metrics that are inconsistent between investors and require entrepreneurs to spend a significant amount of time on reporting – a process that sometimes takes resources away from actually achieving the mission of the organization. While financially focused investors have one clear and consistent set of metrics, synced to the timeline on which the entrepreneur needs to report, impact investors frequently require different information at different times.


The Global Accelerator Learning Initiative (GALI) has already taken steps to create a standardized application and longitudinal survey that enable multiple stakeholder groups to use different evaluation frameworks on the same pool of data. Unfortunately, this process is “really hard” for both program managers and entrepreneurs. The fundamental objective of the GALI initiative is to produce data that can be used in peer-reviewed academic literature and is IRB compliant. This objective imposes a series of constraints and requirements for rigor that dissuade some programs from participating. In addition, GALI is looking at the global accelerator landscape – not just Impact Accelerators. Nevertheless, this initiative is an important and strong step toward understanding the strengths and limitations of collaborative efforts within the Impact Accelerator system.


2017-2018 – Within a few years, programs were interested in seeing a consortium of third-party evaluators come together to understand:

  1. What is the value to third-party evaluators of evaluating the Impact Accelerator system?
  2. Which organizations represent funders who require this information to make more strategic investing decisions?
  3. Are any of these evaluators creating a set of metrics that would support the self-evaluation of individual programs?


Participants identified a few collaborative conversations that would need to happen to bring metrics into alignment – and once again, program managers recognized that standardization is really hard and did not take this process lightly.


The conversations identified were:

  1. Program Managers Alone – understanding what is really needed for self-evaluation.
  2. Program Managers + Social Entrepreneurs + Funders + Platforms – diving into the impact of evaluation on social entrepreneurs, seeing whether we can ease their reporting burden while still providing funders with the information they require to report on effectiveness, and clarifying what programs must do, starting at the application phase, to support effective evaluation frameworks.
  3. Programs + Funders + Evaluators – bringing into alignment the data required to create the information funders are seeking when evaluating the effectiveness of both individual social enterprises and accelerator programs.
  4. Evaluators Alone – understanding the fundamental drivers behind third-party agents' focus on evaluating the impact ecosystem, and how their underlying competition could be set aside to create a more effective process that still meets the needs of evaluation organizations.
  5. Programs + Social Entrepreneurs + Evaluators – doing a deep dive into evaluation processes that have been used in the past, dissecting the main sources of pain, and promoting the inclusion of program manager perspectives in the design of new evaluation frameworks.


2020 – This whole process is designed to bring the impact ecosystem into the next stage of maturity.  The final objective is for social entrepreneurs, funders, and programs to be able to collect, analyze, and report the data necessary to illustrate impact.


The financial system also went through a long and messy process to arrive at the standard financial metrics that companies report today. It took decades for that process to come into alignment, and we in the impact ecosystem must be patient, understanding that the creation of industry standards takes time. The Generally Accepted Accounting Principles were first conceived in 1932, after the trauma of the Great Depression; by some estimates they took almost 30 years to be fully adopted, and they are still only “generally accepted” rather than a complete global standard. Five years will likely not be enough time, but as entrepreneurs, impact investors, and program managers grow increasingly savvy at measuring their impact, it will become easier and easier to report the effectiveness of entrepreneurs and programs in addressing the critical challenges facing the future of our planet.


There is an underlying psychological and emotional weight, unique to the impact ecosystem, that must be acknowledged: we have taken it upon ourselves to develop the systems that will help both people and the planet thrive in the future. This is a huge psychological burden to bear, and there is a natural resistance to the measurement of effectiveness, as we have so much more to lose if it is proven that we are failing to achieve our respective missions. Part of the process required to help the impact ecosystem mature is to work cooperatively with one another. We must remember, as Einstein said: “We cannot solve our problems with the same thinking we used when we created them.”