Incorporating External Knowledge into Crowd Intelligence for More Specific Knowledge Acquisition
Abstract
Crowdsourcing has been a helpful mechanism for leveraging human intelligence to acquire useful knowledge for well-defined tasks. However, aggregating crowd knowledge with currently developed voting algorithms often yields common knowledge that may not be what is expected. In this paper, we consider the problem of collecting knowledge that is as specific as possible via crowdsourcing. With the help of an external knowledge base such as WordNet, we incorporate the semantic relations between alternative answers into a probabilistic model to determine which answer is more specific. We formulate the probabilistic model to account for both worker ability and task difficulty, and solve it with an expectation-maximization (EM) algorithm. Experimental results show that our approach achieves a 35.88% improvement over majority voting when more specific answers are expected.
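To make the notion of "more specific" concrete, the following is a minimal sketch (not the authors' implementation) of how a WordNet hypernym check could compare two candidate answers; it assumes the NLTK WordNet interface, and the function name is_more_specific is illustrative.

    # Sketch: decide whether one candidate answer is a more specific
    # term than another using WordNet's noun hypernym hierarchy.
    from nltk.corpus import wordnet as wn

    def is_more_specific(answer_a: str, answer_b: str) -> bool:
        """True if some sense of answer_a is a hyponym (descendant)
        of some sense of answer_b in the WordNet noun hierarchy."""
        hypernyms = lambda s: s.hypernyms() + s.instance_hypernyms()
        for syn_a in wn.synsets(answer_a, pos=wn.NOUN):
            ancestors = set(syn_a.closure(hypernyms))
            for syn_b in wn.synsets(answer_b, pos=wn.NOUN):
                if syn_b in ancestors:
                    return True
        return False

    # Example: "poodle" lies below "dog", so it is the more specific answer.
    print(is_more_specific("poodle", "dog"))   # True
    print(is_more_specific("dog", "poodle"))   # False

Such pairwise specificity relations are the kind of signal that could feed the probabilistic model described above; the actual model and EM procedure are given in the paper itself.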