I’m working on a reputation analysis for an international training organization that has expressed concerns about its online reputation. After pulling the data from Sysomos MAP and comparing the automated sentiment scores against human scoring, I’ve concluded that if you care about sentiment accuracy, it’s best to have humans evaluate sentiment.
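For anyone curious what that comparison can look like in practice, here is a minimal sketch. It assumes an export with hypothetical columns named "machine_sentiment" and "human_sentiment" holding labels like positive / neutral / negative; Sysomos MAP’s actual export format and column names will differ, so treat this as an illustration of the approach rather than a recipe for the tool.

```python
# Minimal sketch: compare automated sentiment labels against human-coded labels.
# Column names ("machine_sentiment", "human_sentiment") are hypothetical --
# adjust them to whatever your monitoring tool actually exports.
import pandas as pd
from sklearn.metrics import accuracy_score, cohen_kappa_score, confusion_matrix

df = pd.read_csv("sentiment_export.csv")

machine = df["machine_sentiment"].str.lower()
human = df["human_sentiment"].str.lower()

# Simple percent agreement between the tool and the human coders.
print("Agreement:", accuracy_score(human, machine))

# Cohen's kappa discounts the agreement you'd expect by chance alone.
print("Cohen's kappa:", cohen_kappa_score(human, machine))

# The confusion matrix shows where the tool diverges from the humans
# (e.g., neutral documents being scored as negative).
labels = ["positive", "neutral", "negative"]
print(pd.DataFrame(confusion_matrix(human, machine, labels=labels),
                   index=labels, columns=labels))
```

Percent agreement alone can look flattering when most documents are neutral, which is why a chance-corrected measure like kappa is worth checking before trusting the machine’s score.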
I’ve also noted that human scoring of documents has an additional advantage that rarely gets talked about: the more “we” get involved in the output of our monitoring (e.g., by devising scoring metrics and applying them to the data at hand), the more engaged and satisfied, I feel, we will be with the monitoring programs we devise.
I’ve been seeing this more and more, and I take it to mean that the more fully community managers and analysts are “invested” in the data they collect (they touch it, in other words), the more satisfied they are with what they are doing.
And, as a result of doing all this sentiment “scoring” ourselves, we have more confidence in the results than if we let a machine do it.