Abstract We propose a general approach to modeling semi-supervised learning (SSL) algorithms. Specifically, we present a declarative language for modeling both traditional supervised classification tasks and many SSL heuristics, including both well-known heuristics such as co-training and novel domain-specific heuristics. In addition to representing individual SSL heuristics, we show that multiple heuristics can be automatically combined using Bayesian optimization methods. We experiment with two classes of tasks, link-based text classification and relation extraction. We show modest improvements on well-studied link-based classification benchmarks, and state-of-the-art results on relation-extraction tasks for two realistic domains.