This is a question that I used to answer numerically. I would explain that the sample group needed to be large enough for the organisation to run split tests and achieve statistically significant results. I was wrong. Here is why.
I have worked with dozens of charities looking to diversify their income streams, test new channels and determine where they should invest their fundraising budgets. They run tests with agencies, on different platforms, using multiple methods and communication styles. I have encouraged charities to try (or retry) email fundraising as I have seen the impressive results that can be achieved through a strong email programme.
Some charities are seeing positive results from email; the likes of PETA, Charity:Water and Dignity in Dying come to mind. Others have run small but statistically significant tests and decided email is not a channel that works for their charity or their target audience. I used to accept that both were right and that it was simply a channel that did not work for everyone. Now I believe that many charities have been analysing the wrong data when determining whether it works for them or not and have dismissed email too soon.
Measuring the ROI of your first test, or calculating lifetime value from your first 5-10 emails, does not tell you whether the channel works for your organisation. Instead it tells you how well you currently know how to communicate via this channel, how effective your donation page is and whether you responded to the data from your split tests. Most importantly, it gives you baseline data, such as email open rates, click-through rates and conversion rates, so that you know where to focus your improvement and optimisation efforts.
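To make those baseline figures concrete, here is a minimal sketch of how they might be computed from raw send data. The counts and the helper function are illustrative assumptions, not benchmarks from any real campaign.

```python
# A hypothetical email campaign, expressed as raw counts.
# These numbers are invented for illustration only.

def rate(part: int, whole: int) -> float:
    """Return part/whole as a fraction, guarding against division by zero."""
    return part / whole if whole else 0.0

sent, opened, clicked, donated = 10_000, 2_400, 480, 60  # hypothetical counts

open_rate = rate(opened, sent)        # share of recipients who opened
click_to_open = rate(clicked, opened) # share of openers who clicked
conversion = rate(donated, clicked)   # share of clickers who donated

print(f"open {open_rate:.1%}, click-to-open {click_to_open:.1%}, "
      f"conversion {conversion:.1%}")
```

Tracking each step of the funnel separately, rather than a single ROI figure, shows exactly where supporters drop off and therefore where optimisation effort should go.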
The key to email fundraising is regularly reviewing how your supporters respond and making small, incremental changes that lead you to your desired result: an action-taking group of supporters.
This is similar to the approach applied by Dave Brailsford when he was hired to ensure that a British cyclist won the Tour de France for the first time. It took him three years to realise this goal, and in 2012 Sir Bradley Wiggins became Great Britain’s first Tour de France winner.
Brailsford tested every element that could have an impact on Sir Bradley’s performance and “optimised” everything he could control. He discovered which pillow supported better sleep and which massage gel was most effective, and he made sure that his cyclists always washed their hands to avoid getting sick. Brailsford understood that a 1% increase in performance, applied across multiple areas, would be the key to success.
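The arithmetic behind those marginal gains is easy to check: small improvements compound multiplicatively rather than adding up. A brief sketch, with illustrative numbers that are my own assumption rather than anything from Brailsford:

```python
# Marginal gains compound: improving several independent areas by the
# same small factor multiplies the factors together.

def compounded_gain(improvement: float, areas: int) -> float:
    """Overall multiplier from applying one small improvement across
    a number of independent areas."""
    return (1 + improvement) ** areas

# A 1% gain in each of 10 areas compounds to roughly a 10.5% gain overall,
# slightly more than the 10% a simple sum would suggest.
overall = compounded_gain(0.01, 10)
print(f"Overall multiplier: {overall:.4f}")
```

The same logic applies to an email programme: a 1% lift in open rate, click rate and conversion rate each looks trivial in isolation, but together they move the final result noticeably.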
With email fundraising there is so much that can be optimised and therefore so much to test. You need to understand how frequently you should communicate, how often you should ask for donations, which other parts of your charity you should expose to your supporters, how long your emails should be and who they should come from. The list goes on.
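When you do run split tests on variables like these, the significance question from the opening can be answered directly. Here is a minimal sketch of a two-proportion z-test using only the standard library; the subject-line scenario and counts are hypothetical assumptions for illustration.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference between two conversion rates.
    Returns the z statistic and the two-sided p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical split test: subject line A vs subject line B,
# 2,000 recipients each, 120 vs 160 donations.
z, p = two_proportion_z(120, 2_000, 160, 2_000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p < 0.05 here, so B's lift is unlikely to be chance
```

A result like this tells you which variant won; it does not, on its own, tell you whether the channel works, which is exactly the distinction this article is drawing.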
Many fundraising teams, especially in larger charities, have lost the ability to think like this. They have become project managers who are excellent at managing agency relationships and analysing ROI. That is why I am asked questions such as, “How many supporters do I need to recruit to test whether email fundraising can work for my organisation?”
Today’s fundraiser has been preceded by decades of failure, learning and optimisation in methods such as direct mail, telemarketing and door-to-door fundraising. Others before them have done the hard slog and the research that we all benefit from. That is why traditional channels “work”.
This is not yet the case for “digital” fundraising, which is why email fundraising can get bad press. I have heard very senior, well-respected fundraisers claim that “email fundraising doesn’t work”. I challenge this myth and suggest instead that, by investing resources in testing and optimisation, you can very likely make email fundraising work for your organisation.
Very few organisations will find new channels by running small tests that use short-term ROI as the central measure of success, unless they happen to have a strong email programme in place at the start of the test.
I challenge you: next time you try a new channel, try it for at least six months, test everything and remember Dave Brailsford along the way.