Abstract
We describe a method for implementing the evaluation and training of decision trees and forests entirely on a GPU, and show how this method can be used in the context of object recognition. Our strategy for evaluation involves mapping the data structure describing a decision forest to a 2D texture array. We navigate through the forest for each point of the input data in parallel using an efficient, non-branching pixel shader. For training, we compute the responses of the training data to a set of candidate features, and scatter the responses into a suitable histogram using a vertex shader. The histograms thus computed can be used in conjunction with a broad range of tree learning algorithms. We demonstrate object recognition results identical to those obtained on a CPU, computed in about 1% of the time. To our knowledge, this is the first time a method has been proposed which is capable of evaluating or training decision trees on a GPU. Our method leverages the full parallelism of the GPU. Although we use features common to computer vision to demonstrate object recognition, our framework can accommodate other kinds of features for more general utility within computer science.
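The branch-free traversal described above can be illustrated with a small sketch. The flat node layout below (parallel arrays for feature index, threshold, child indices, and leaf labels) is a hypothetical stand-in for the paper's 2D texture encoding, and the NumPy `where` step plays the role of the non-branching pixel shader: every input point advances one tree level per iteration, with leaves pointing to themselves so the loop can run for a fixed depth.

```python
import numpy as np

# Hypothetical flat tree layout (illustrative, not the paper's exact texture
# format). Nodes 0-2 are internal; nodes 3-6 are leaves that point to
# themselves and carry a class label.
feature   = np.array([0,   1,   0,   0, 0, 0, 0])    # feature tested at node
threshold = np.array([0.5, 0.3, 0.7, 0, 0, 0, 0])    # split threshold
left      = np.array([1, 3, 5, 3, 4, 5, 6])          # left-child index
right     = np.array([2, 4, 6, 3, 4, 5, 6])          # right-child index
label     = np.array([-1, -1, -1, 0, 1, 2, 3])       # leaf label (-1 = internal)

def evaluate(X, depth=3):
    """Evaluate the tree for all rows of X in parallel, with no data-dependent
    branches: the next node is selected arithmetically at every level."""
    node = np.zeros(len(X), dtype=int)               # all points start at root
    for _ in range(depth):
        go_right = X[np.arange(len(X)), feature[node]] > threshold[node]
        node = np.where(go_right, right[node], left[node])
    return label[node]

# Example: two points classified in one vectorized pass.
print(evaluate(np.array([[0.2, 0.1], [0.9, 0.0]])))
```

Because every point executes the same instruction sequence regardless of which path it takes, this maps directly onto SIMD-style shader execution; self-looping leaves are what make the fixed-depth loop safe for unbalanced trees.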