Abstract
Topic models are typically evaluated with respect to the global topic distributions that they generate, using metrics such as coherence, but without regard to local (token-level) topic assignments. Token-level assignments are important for downstream tasks such as classification. Recent models, which claim to improve token-level topic assignments, are validated only on global metrics. We elicit human judgments of token-level topic assignments: across a variety of topic model types and parameters, global metrics agree poorly with human judgments. Since human evaluation is expensive, we propose automated metrics to evaluate topic models at a local level. Finally, we correlate our proposed metrics with human judgments: an evaluation based on the percent of topic switches correlates most strongly with human judgment of local topic quality. This new metric, which we call consistency, should be adopted alongside global metrics such as topic coherence.
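To make the switch-based idea concrete, the sketch below computes a consistency score from token-level topic assignments. This is a minimal illustration under stated assumptions, not the paper's reference implementation: it assumes each document is a sequence of per-token topic IDs, that consistency is one minus the fraction of adjacent token pairs whose topics differ, and that scores are averaged per document; the function names `switch_percent` and `consistency` are illustrative.

```python
# Minimal sketch of a "percent of topic switches" consistency metric.
# Assumed definition: consistency = 1 - (switches / adjacent token pairs),
# averaged over documents. This is illustrative, not the paper's exact code.

def switch_percent(doc_topics):
    """Fraction of adjacent token pairs in one document that switch topics."""
    pairs = list(zip(doc_topics, doc_topics[1:]))
    if not pairs:  # single-token documents have no adjacent pairs
        return 0.0
    switches = sum(a != b for a, b in pairs)
    return switches / len(pairs)

def consistency(corpus_topics):
    """Average per-document consistency: 1 minus the switch percent."""
    docs = [d for d in corpus_topics if len(d) > 1]
    if not docs:
        return 1.0
    return sum(1.0 - switch_percent(d) for d in docs) / len(docs)

# Toy usage: three documents with token-level topic IDs.
corpus = [
    [0, 0, 0, 1, 1],  # 1 switch over 4 adjacent pairs -> 0.75 consistent
    [2, 2, 2, 2],     # no switches -> fully consistent
    [0, 1, 0, 1],     # a switch at every pair -> fully inconsistent
]
print(consistency(corpus))  # ~0.583 under these assumptions
```

Under this reading, a model that scatters topic assignments within a document scores low even if its global topics look coherent, which is exactly the local quality that global metrics miss.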