I should start by saying that I’m finishing up a PhD in evaluation at the University of Minnesota – which has caused me to scrutinize pretty much any report that presents data. Basically, anytime I read anything, I automatically question things. When the 2012 Nonprofit Social Network Benchmark Report (NSNBR) came out I was excited to dive in and read about what’s going on in the sector. The survey seems pretty comprehensive (assuming the appendix is the full and complete survey) – but I immediately had some important questions that affect the quality of the data.
Problem #1: Methodology is too vague
There is only one sentence on the methodology in this 40-page report. It says:
“The respondents were recruited via email to an online survey of 58 questions between January 24, 2012 and February 21, 2012 from a variety of industry email lists yielding 3,522 respondents.”
Most people skip right over this section – but some argue it’s the most important part of a report. If the way the data was collected isn’t valid or reliable, the whole report is junk. This is important stuff, and if you want to cite, use, or pass on a report’s findings, you’d better make sure they’re quality. So, what’s wrong with the 2012 NSNBR’s methodology? Maybe nothing. Maybe lots. We don’t know, because this section is so vague.
The biggest question I have is which “industry email lists” the survey link was sent to. If it was sent to just NTEN’s list, the survey could be completely skewed (NTEN was one of the three organizations that conducted the survey). NTEN = Nonprofit Technology Network, which means the people on their lists are probably on social media and probably use it a fair amount. They are likely somewhat tech-savvy. Without this information we have no idea whether the survey is representative of nonprofits as a whole. Also, who at each organization completed the survey? Was it the executive director? The communications director? And how did they make sure two or more people from the same organization didn’t complete the survey?
Anyone know the answers to my questions?
It would also be nice to know the response rate (the percentage of people who were sent the survey who actually responded), but they likely didn’t include it because, having used email lists, it would be hard for them to calculate how many people actually received the survey.
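The calculation itself is trivial – the hard part is knowing the denominator. A minimal sketch in Python (the invited count below is entirely made up for illustration; only the 3,522 respondents figure comes from the report):

```python
def response_rate(respondents: int, invited: int) -> float:
    """Return the survey response rate as a percentage."""
    if invited <= 0:
        raise ValueError("invited must be a positive count")
    return 100.0 * respondents / invited

respondents = 3_522   # reported in the 2012 NSNBR
invited = 50_000      # hypothetical: true list reach was not published

print(f"Response rate: {response_rate(respondents, invited):.1f}%")
# With a hypothetical 50,000 invitations, that's about 7.0%
```

The point of the sketch: without a reliable `invited` number (how many unique people the combined email lists actually reached), the response rate simply can’t be computed – which is likely why the report omits it.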
Read about the other problems with the report and NTEN/Common Knowledge’s response to this series and the issues with the survey:
Not a big deal, but a funny side note: the above graphic is from their website. Note that it says 2012 – but also says “3rd Annual” – that’s wrong. Once you download the report, you’ll see the 2012 edition is actually their 4th Annual report.