As I mentioned on Monday, last week I highlighted a few of the problems with the Nonprofit Social Network Benchmark Reports. Annaliese Hoehling, Publications Director for NTEN, and Jeff Patrick, President of Common Knowledge, were kind enough to answer a few of the questions I had posed about their report. I discussed their responses to my questions about the methodology on Monday. Unfortunately, after emailing back and forth a couple of times, Patrick has not shared how exactly they conducted the data analysis (one of the questions I had about the report). So, to wrap up my critical look at the 2012 Nonprofit Social Network Benchmark Report, here are the responses to my remaining questions:
The current list of budget categories does not appropriately reflect nonprofits in the U.S. As I discussed earlier this week, Hoehling shared that “about 75% of US nonprofits report annual revenue less than $1M” and over the past 4 years, 40 – 46% of their respondents have been nonprofits with annual budgets less than $1 million. Knowing this, it should be obvious that the categories for budget should include several options under $1 million (less than $250,000, $250,000 – $499,999, $500,000 – $749,999, and $750,000 – $999,999). The current options also overlap (read my previous post for more discussion on these issues).
In response to this, Hoehling shared:
“We realize that the general nonprofit sector in the US has a make-up different than the budget breakdown we provided (with about 75% of US nonprofits reporting annual revenue less than $1M), which is another reason we make sure that readers understand that the survey respondents don’t necessarily reflect the general nonprofit sector — we’re interested in learning about practices that lead to success for nonprofits advocating and fundraising on social networks, rather than a “state of” social media use in the nonprofit sector.”
Patrick also responded to this question and said:
“That’s always a hot debate for this survey. We address such a wide range of groups that breaking it out into even more tiers creates a really BIG list, which in turn is overwhelming for respondents. Second, for our purposes, I’m not sure there is significant behavioral differences between $250K vs $1million.”
I was very surprised to hear Patrick say he wasn’t sure there would be significant differences between nonprofits under $1 million – surprised because I know there are differences, which is why I raised this as a problem in the first place. Here was my response to him:
I understand wanting to avoid more options – but in this case I think you would find it useful. While it’s easy to think there wouldn’t be differences between orgs under $1 million, I have found there are, in fact, notable differences. In the statewide survey I conducted in Minnesota (about 650 respondents) on social media use and evaluation (conducted Fall of 2010), I found several interesting differences between those with budgets of less than $250k and those with $500k – $1 million budgets. Here are just a few of the ways those groups differed:
- Nonprofits with budgets of under $250k (27.7%) were more than twice as likely as orgs with budgets of $500k – $1 mil (12.3%) to have zero staff dedicated to social media.
- Nonprofits with budgets under $250k (32.1%) were much less likely to have “Reallocated money from other part of organization (not programming)” to pay for their social media work than $500k – $1 mil orgs (50%).
- Nonprofits with larger budgets ($500k – $1 mil, 44.6%) planned to increase spending on social media more often than those with under $250k in annual budgets (33.7%).
- There were notable differences in the types of social media used. For example, only 31.3% of under-$250k orgs used YouTube, compared to 55.4% of $500k – $1m orgs.
There were several other differences between the groups in terms of their evaluation of their social media efforts. I’m hoping that next year you will break out this category a bit. I’d also recommend adding geographic location as a survey question.
Patrick did not respond any further to this point.
My final problem with the report was that there seemed to be very little data analysis done. All that was really reported was means (very basic data analysis). I expected that they would have done some sort of test for differences (e.g. Chi-Square) similar to the 2010 Nonprofit Fundraising Survey by The Nonprofit Research Collaborative. When I asked Patrick about this, his response was:
“We did a variety of data validation and testing of our data. When you say “Differences in the data” – what exactly are you referencing?”
I responded to him explaining what I meant (Chi-Square tests, etc.), but he has not responded (I have emailed him 3 times about this now). I’m not sure what that means (if anything), so I just hope that next year they include a sentence or two in their report sharing what type of data analysis was done.
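For readers wondering what kind of test I had in mind: a Chi-Square test of independence checks whether two groups differ on a yes/no measure by more than chance would explain. Here’s a rough sketch in Python, using made-up counts (not the actual survey data – I’m just illustrating the mechanics with numbers shaped like the YouTube-use percentages above):

```python
# Sketch of a Pearson chi-square test of independence for a 2x2 table.
# The counts below are hypothetical, chosen only to illustrate the method.

def chi_square_2x2(table):
    """Return the chi-square statistic for a 2x2 contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    total = sum(row_totals)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            # Expected count under the assumption the groups don't differ
            expected = row_totals[i] * col_totals[j] / total
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

# Hypothetical counts: [uses YouTube, does not], by budget group
observed = [[31, 68],   # under $250k: 31 of 99 orgs
            [55, 44]]   # $500k - $1 mil: 55 of 99 orgs

stat = chi_square_2x2(observed)
# With df = (2-1)*(2-1) = 1, the critical value at p = .05 is 3.841
print(stat, stat > 3.841)
```

If the statistic exceeds 3.841, the difference between the two budget groups is statistically significant at the .05 level – which is exactly the kind of sentence I’d like to see in next year’s report.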
Photo Credit: Ali Catterall