Abstract

Recently proposed feature attribution methods help users interpret the predictions of complex models. Our approach integrates feature attributions into the objective function, allowing machine learning practitioners to incorporate priors during model building. To demonstrate the effectiveness of our technique, we apply it to two tasks: (1) mitigating unintended bias in text classifiers by neutralizing identity terms; (2) improving classifier performance in a scarce-data setting by forcing the model to focus on toxic terms. Our approach adds an L2 distance loss between feature attributions and task-specific prior values to the objective. Our experiments show that (i) a classifier trained with our technique reduces undesired model biases without a tradeoff on the original task, and (ii) incorporating priors helps model performance in scarce-data settings.
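
As a sketch of the augmented objective (the notation here is ours, introduced for illustration): writing $\mathcal{L}_{\text{task}}$ for the original training loss, $a_i(x)$ for the attribution assigned to feature $i$ on input $x$, and $p_i$ for its task-specific prior value, the L2 attribution penalty described above takes the form

\[
\mathcal{L} \;=\; \mathcal{L}_{\text{task}} \;+\; \lambda \sum_{i \in S} \bigl( a_i(x) - p_i \bigr)^2,
\]

where $S$ is the set of features for which priors are supplied (e.g., identity terms with a neutral prior, or toxic terms with an elevated prior) and $\lambda$ is a hyperparameter balancing the attribution penalty against the task loss.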