
Strategyproof Classification with Shared Inputs

2019-11-15

Abstract: Strategyproof classification deals with a setting where a decision-maker must classify a set of input points with binary labels, while minimizing the expected error. The labels of the input points are reported by self-interested agents, who might lie in order to obtain a classifier that more closely matches their own labels, thus creating a bias in the data; this motivates the design of truthful mechanisms that discourage false reports. Previous work [Meir et al., 2008] investigated both decision-theoretic and learning-theoretic variations of the setting, but only considered classifiers that belong to a degenerate class. In this paper we assume that the agents are interested in a shared set of input points. We show that this plausible assumption leads to powerful results. In particular, we demonstrate that variations of a truthful random dictator mechanism can guarantee approximately optimal outcomes with respect to any class of classifiers.
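To illustrate the core idea behind the random dictator mechanism described in the abstract, here is a minimal sketch (not the paper's exact construction): all agents report labels for the same shared input points, one agent is chosen uniformly at random, and the mechanism returns the classifier from the concept class that best fits that agent's report. The example uses a hypothetical finite class of threshold classifiers on the line; agent names and data are made up for illustration.

```python
import random

def random_dictator(reported_labels, classifier_class, points, rng=random):
    """Random dictator sketch: pick one agent uniformly at random and
    return the classifier in `classifier_class` minimizing error on that
    agent's reported labels. An agent's report only affects the outcome
    when it is the dictator, and then lying can only hurt its own
    accuracy -- the intuition behind strategyproofness."""
    dictator = rng.choice(list(reported_labels))
    labels = reported_labels[dictator]
    # empirical risk minimization over the (finite) concept class
    return min(classifier_class,
               key=lambda c: sum(int(c(x)) != y for x, y in zip(points, labels)))

# toy setup (hypothetical): threshold classifiers on shared points
points = [0.1, 0.4, 0.6, 0.9]
thresholds = [lambda x, t=t: x >= t for t in (0.0, 0.5, 1.0)]
reports = {"alice": [0, 0, 1, 1], "bob": [1, 1, 1, 1]}
clf = random_dictator(reports, thresholds, points)
```

Because the dictator is drawn at random, the returned classifier depends on the draw; the approximation guarantees in the paper are stated in expectation over this randomness.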

