
Quality of sentiment analysis

Posted by Brandon Klein

How good is the state of the art in sentiment detection? The scientific literature offers numerous approaches to the topic, and many of them have been shown in experiments to perform very well in both precision and recall. For instance, basic text-based sentiment detection seems to be “solved”, in the sense that the precision and recall of current algorithms are typically above 80% [14, 22]. On the other hand, if one looks at real-world applications that use or include sentiment detection, the picture changes dramatically. In fact, various blog posts on the web state something like this: “More often than not, a positive comment will be classified as negative or vice-versa” [16]. Is there really such a large gap between research and real-life systems?
In this paper, we tackle this question by evaluating the performance of several commercial sentiment detection tools. More precisely, we explore how well existing tools perform on different sentence-based test corpora. This allows us to identify the potential for improvement and to indicate relevant directions for future research on sentiment detection. We then combine all tools using a machine learning technique (Random Forest) to exploit more of the potential hidden in the commercial landscape.
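The combination step can be sketched as follows: treat each commercial tool's sentiment score for a sentence as one feature, and train a Random Forest on human-labeled sentences to learn which tool to trust in which situation. This is only a minimal illustration with made-up scores and labels, not the paper's actual setup or data:

```python
# Sketch: combining the outputs of several sentiment tools with a Random Forest.
# The scores and labels below are invented for illustration only.
from sklearn.ensemble import RandomForestClassifier

# Each row holds the scores three hypothetical tools assign to one sentence
# (-1 = negative, 0 = neutral, +1 = positive).
tool_scores = [
    [1, 1, 0],     # tools mostly agree: positive
    [-1, -1, -1],  # tools agree: negative
    [0, 1, -1],    # tools disagree
    [1, 0, 1],
    [-1, 0, -1],
    [-1, -1, 0],
]
gold_labels = [1, -1, 1, 1, -1, -1]  # human-annotated ground truth

# The forest learns, from the labeled examples, how to weigh and
# combine the individual tools' (possibly conflicting) judgments.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(tool_scores, gold_labels)

# Classify a new sentence from the three tools' scores.
prediction = clf.predict([[1, 1, 1]])[0]
print(prediction)
```

In a real evaluation, the features would be the actual tools' outputs on a test corpus, and performance would be measured with cross-validation rather than on the training data.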