What does success look like?
This post is for people who want to make their group succeed and who believe in the importance of measuring success. So let's start there.
The core measurement of a business is profitability. Clearly that's simplistic and ignores business growth, shareholders, "triple bottom lines" and the like, but when all is said and done, the question a business needs to answer to define success is "are we making a profit?"
What follows from this question is a fairly clear set of key metrics that businesses use to determine whether their strategy will help achieve their benchmarks (because if they meet their benchmarks, they should be successful and profitable).
For advocacy groups, social-service non-profits and foundations, everything gets more complicated. Profit is easy to measure, and the metrics that drive it are relatively easy to divine. But these social-sector organizations need to start defining success by answering the question "are we making an impact?" What constitutes impact isn't a one-size-fits-all proposition. The discussions at many a board meeting, and the hammering out of a mission statement, are designed to help clarify what success means to that particular organization.
We know what success looks like. How do we get there?
Strategy is your theory on how to achieve success. Strategy is like a chain: you attach one end to your goals and yourself to the other. Your execution is the physical effort it takes to bring you to your goals. So your strategy is only as strong as the weakest link in the chain. How do you know if one of these links is going to break? You measure and compare to your benchmarks.
Your benchmarks will not only tell you if your execution is unlikely to result in your goals (i.e. the chain is breaking), but must also give you the information you need to change strategies (i.e. you're pulling the chain in the wrong direction).
Setting benchmarks and defining metrics
"What gets measured gets done" is the old chestnut that drives the classic view of setting your benchmarks and key performance indicators (KPIs). But this statement has a hidden dark side: if you measure the wrong things, those are the things that will get done. Decisions on what to measure will impact your ability to execute your strategy.
In the world of online community building, engagement and advocacy, we are inundated with more datapoints about our actions than we could possibly absorb. Google Analytics alone measures thousands of pieces of data about how your website is being discovered and used. Your email marketing program, your donation database, your field operation -- all of these add to the sea of data.
The Lean Startup Non-Profit
Eric Ries is an entrepreneur, a businessperson's businessman, and the author of the best-selling B-school hit The Lean Startup. The dirty little secret is that it is also the best book on non-profit measurement, even if he doesn't know it.
Ries is obsessed with metrics -- and with separating the good from the bad. Otherwise, we're back in the sea of data, or as Ries puts it: "How do we know that the changes we've made are related to the results we're seeing? More important, how do we know that we are drawing the right lessons from those changes?" Ries posits that any worthwhile metric must demonstrate three A's: it must be "actionable, accessible, and auditable." Metrics that don't pass this test should be viewed with suspicion -- they easily lead to false positives where you think you are on the right track but in reality are just spinning your wheels. Ries calls these "vanity metrics."
Good metrics and vanity metrics for non-profits online
Let's apply the three A's test to common online non-profit metrics, and see if it helps make sense of the sea of data in your website traffic.
- Overall traffic: these raw numbers, even tracked over time, aren't actionable -- if they improve, we'll credit our smart strategy, but if they decline, we'll blame the news cycle or flagging interest in our issue. Most raw numbers (including social media counts like Twitter followers or Facebook Likes) are the worst kind of vanity metrics: two different people can read the same number as awesome or horrific depending on their perspective. These numbers don't tell you an actionable story: you don't know what to do next.
- Pageviews, Visitors, Visits: Google Analytics gives you all three metrics for your entire website at the same time, which presents a challenge to accessibility. A good metric needs to be accessible to pretty much everyone: precise enough for the technical team, visual enough for the marketing team, and grounded in real-world behavior so volunteers can understand it. Having to explain that a "visitor" is technically one browser within a 30-minute window (and so could be the same person multiple times over the course of a day) is a good way to confuse and disengage your audience.
- But where these online metrics succeed is that they are clearly auditable, within reason: you can give anyone access to the data. There are, however, two challenges to be aware of. (1) You need to know where the data is coming from. Does website traffic include your own office? Is the tracking code installed on all pages (including your online petitions, newsletters, etc.)? (2) Someone might read the data in a different way, though this challenge is really an opportunity to educate and engage people across your organization on metrics.
Some of the better online metrics that we recommend clients use are:
- Homepage "bounce rate" -- the percentage of people who view only the homepage of your website, and then leave. Charting this against time and looking for changes in content (what was on the homepage when the bounce rate was low/good?) and against traffic sources can give you clear "marching orders."
- On social media, we recommend looking at engagement rates like:
- Interaction rate: the number of interactions (# of retweets, replies, or likes of your posts) divided by the number of followers.
- Click rate: the number of clicks on the link in your posts divided by the number of followers.
- Clicks per post: number of clicks on the link in your posts.
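All of these are simple ratios you can compute from raw counts. Here is a minimal sketch in Python; the session records and counts are invented for illustration, not any analytics platform's actual export format:

```python
# Homepage bounce rate: the share of sessions that landed on the homepage
# and viewed only that one page. Each hypothetical session record is
# (landing_page, pages_viewed).
sessions = [("/", 1), ("/", 3), ("/", 1), ("/about", 1), ("/", 5), ("/", 1)]
home = [s for s in sessions if s[0] == "/"]
bounce_rate = sum(1 for s in home if s[1] == 1) / len(home)
print(f"homepage bounce rate: {bounce_rate:.0%}")  # 60%

def engagement_rates(interactions, link_clicks, posts, followers):
    """The three social media rates described above.

    interactions: retweets + replies + likes across the period
    link_clicks:  clicks on the links in your posts
    posts:        posts published in the period
    followers:    follower count (a snapshot; assumed constant here)
    """
    return {
        "interaction_rate": interactions / followers,
        "click_rate": link_clicks / followers,
        "clicks_per_post": link_clicks / posts,
    }

# Example: 120 interactions and 45 link clicks over 10 posts, 3,000 followers.
rates = engagement_rates(120, 45, 10, 3000)
print(rates)  # interaction rate 4%, click rate 1.5%, 4.5 clicks per post
```

The point of dividing by followers or posts is exactly the vanity-metrics argument above: a raw interaction count grows with your audience whether or not your content is working, while a rate is comparable across time.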
Rinse and repeat: the necessity of experiments
Experiments are where you use your metrics the most. Sure, you can watch how metrics change over time, but that mostly tells you which external factors are affecting your baseline. Experiments are how you determine a course of action that helps you better execute your strategy. An experiment can be as formal as an A/B test, or as informal as adding content more often and assessing the results. Experiments (or in Ries's language, "validated learning") determine which changes make a difference, and allow you to estimate the ROI of rolling out the experimental version as the new normal. The more dramatic the difference between your experimental version and your control, the more confidence you can have in your results. You can run experiments designed to beat your current benchmarks on a metric, or run experiments without a hard definition of your benchmarks -- start from your current baseline and see how far you can push.
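One common way to put a number on "how dramatic the difference is" in an A/B test is a two-proportion z-score. This is a standard statistical technique, not something from The Lean Startup itself, and the signup figures below are invented for illustration:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """How many pooled standard errors apart the control (A) and
    experimental (B) conversion rates are; |z| above roughly 1.96
    is conventionally treated as significant at the 95% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control petition page:      120 signups from 2,000 visits (6%).
# Experimental petition page: 180 signups from 2,000 visits (9%).
z = two_proportion_z(120, 2000, 180, 2000)
print(round(z, 2))  # 3.6 -- well past 1.96, a difference you can act on
```

A small z-score doesn't mean the change failed; it means the difference is too small, given your traffic, to distinguish from noise -- which is itself useful validated learning.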
Where to go from here
If you were looking for the paint-by-numbers kit on how to measure your success or impact, you are no doubt disappointed. The truth is, only someone who knows your mission and knows how you are defining impact and success can help you measure it.
What I hope I have provided is a diagnostic tool: a way of screening out "vanity metrics." The metrics you end up with, and the process of experimentation, answer the question "how do we optimize our strategy to build success?"