Abstract

Previous work by Talbot and Osborne [2007] explored the use of randomized storage mechanisms in language modeling. These structures trade a small amount of error for significant space savings, enabling the use of larger language models on relatively modest hardware. Going beyond space-efficient count storage, here we present the Talbot Osborne Morris Bloom (TOMB) Counter, an extended model for performing space-efficient counting over streams of finite length. Theoretical and experimental results are given, showing the promise of approximate counting over large vocabularies in the context of limited space.