Last week I highlighted a few of the problems with the Nonprofit Social Network Benchmark Reports. Annaliese Hoehling, Publications Director for NTEN, and Jeff Patrick, President of Common Knowledge, were kind enough to answer a few of the questions I had posed about their report. Since I’m still waiting to hear back from Jeff Patrick about a couple of things, I’ll share their response to my questions about their methodology today, and the rest of their response later this week.
1) What “industry lists” was the survey sent to? Hoehling shared that the survey was sent out to four email lists: Common Knowledge, NTEN, Blackbaud, and Network for Good.
“It’s worth noting that these lists were chosen because they skew in different directions (albeit there is demographic overlap) with Common Knowledge’s list including small, medium and large organizations from a wide variety of verticals and a pretty wide range of technical sophistication and experience with social networking, Blackbaud’s including a really wide range of groups – sizes and sectors, and Network for Good’s list providing especially good reach into smaller organizations who are probably less technically sophisticated although their list is not monolithic along this characteristic either.”
It was great to see the effort they made to try to get a more representative sample. Unfortunately, it’s clear that their efforts were not successful. Hoehling shared that “about 75% of US nonprofits report annual revenue less than $1M” – which is a huge difference from the 40–46% of nonprofits with annual budgets less than $1 million that completed their survey. This suggests that nonprofits with large budgets (which are likely more tech-savvy and spend more on social media) were heavily overrepresented in the survey and report. This alone shows that the survey results cannot be generalized.
2) Are response rates available?
“We don’t have response rates. We don’t have percentages from each list either (that is, what proportion of responses came from each list) – we created distribution source codes but unfortunately they didn’t get used in all cases, so analysis of this would not be accurate.”
This will be something to watch for in 2013. Hopefully they can fix the issues they experienced with their distribution source codes so they can provide response rates next year.
3) Who completed the survey? Was it the executive director? The communications director? How did they make sure two or more people from the same organization didn’t complete the survey?
“Unfortunately, we did not ask respondents to indicate their professional role in the survey, which is an oversight and will be added in next year’s report.”
Looks like with this round of the survey there’s no way to know. It’s great that it will be added next year though!
I’m glad they are going to make some changes for next year and hopefully resolve some of these issues. The biggest issue for me is the generalizability. Hopefully they make an attempt to make the survey more representative so we get a more accurate picture of how nonprofits are using social media. There are ways to determine whether a survey is representative (and therefore more likely to be generalizable) without having to do a random sample.
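One simple check along these lines is to compare the sample’s composition against a known sector-wide benchmark. As a rough sketch: the post notes that about 75% of US nonprofits report revenue under $1M, versus roughly 40–46% of survey respondents. A one-sample proportion test shows how far off that is. Note the sample size here (n=500) is a hypothetical assumption for illustration only, since the report’s respondent count isn’t given in this exchange.

```python
import math

def proportion_z_test(p_observed, p_expected, n):
    """z statistic comparing an observed sample proportion
    to an expected population proportion."""
    se = math.sqrt(p_expected * (1 - p_expected) / n)
    return (p_observed - p_expected) / se

# 43% = midpoint of the 40-46% range of small-budget respondents;
# 75% = sector-wide share of nonprofits under $1M (from the post);
# n = 500 respondents is assumed, not taken from the report.
z = proportion_z_test(0.43, 0.75, 500)
print(round(z, 1))  # a z score far below -2 signals the sample is not representative
```

Any |z| much larger than about 2 means the mismatch is far too big to be sampling noise, which is the statistical version of the generalizability complaint above.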
“Note that we don’t claim that the survey was conducted rigorously — and therefore we’re careful to refer to “survey respondents” throughout the report, rather than to suggest that the survey results can represent the general practices of the US nonprofit sector.”
Take this survey with a grain of salt. It has some great insights – but they aren’t about the nonprofit sector or nonprofits as a whole. They are about the convenience sample used for the survey.
Photo credit: visual.dichotomy