Abstract
Graph-based methods have been demonstrated as one of the most effective approaches for semi-supervised learning, as they can exploit the connectivity patterns between labeled and unlabeled data samples to improve learning performance. However, existing graph-based methods are either limited in their ability to jointly model graph structures and data features, such as the classical label propagation methods, or require a considerable amount of labeled data for training and validation due to high model complexity, such as the recent neural-network-based methods. In this paper, we address label-efficient semi-supervised learning from a graph filtering perspective. Specifically, we propose a graph filtering framework that injects graph similarity into data features by treating them as signals on the graph and applying a low-pass graph filter to extract useful data representations for classification, where label efficiency can be achieved by conveniently adjusting the strength of the graph filter. Interestingly, this framework unifies two seemingly very different methods – label propagation and graph convolutional networks. Revisiting them under the graph filtering framework leads to new insights that improve their modeling capabilities and reduce model complexity. Experiments on various semi-supervised classification tasks on four citation networks and one knowledge graph, and one semi-supervised regression task for zero-shot image recognition, validate our findings and proposals.
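The core idea sketched above – treating node features as graph signals and smoothing them with a low-pass graph filter before classification – can be illustrated with a minimal numpy sketch. This is not the paper's exact filter; the renormalized adjacency S = D^{-1/2}(A + I)D^{-1/2} and the filter order k used here are illustrative assumptions (a GCN-style propagation), with k playing the role of the adjustable filter strength.

```python
import numpy as np

def low_pass_filter(A, X, k=2):
    """Smooth node features X by k rounds of neighborhood averaging.

    A : (n, n) symmetric adjacency matrix of the graph.
    X : (n, d) node feature matrix, viewed as d signals on the graph.
    k : filter order; larger k means a stronger (smoother) low-pass filter.
    """
    # Add self-loops and symmetrically normalize (illustrative choice).
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    S = D_inv_sqrt @ A_hat @ D_inv_sqrt

    # Applying S repeatedly attenuates high-frequency components of X,
    # pulling features of connected nodes toward each other.
    Z = X.copy()
    for _ in range(k):
        Z = S @ Z
    return Z

# Toy usage: a 3-node path graph 0-1-2 with 2-dimensional features.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
X = np.array([[1., 0.],
              [0., 1.],
              [1., 0.]])
Z = low_pass_filter(A, X, k=2)
```

After filtering, the features of adjacent nodes become more similar, which is the graph-similarity injection the abstract refers to; a downstream classifier then operates on the smoothed representations Z.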