What Does the Evidence Say? Models to Help Make Sense of the Biomedical Literature

Abstract
Ideally, decisions regarding medical treatments
would be informed by the totality of the available
evidence. The best evidence we currently have is
in published natural language articles describing
the conduct and results of clinical trials. Because
these are unstructured, it is difficult for domain experts (e.g., physicians) to sort through and appraise
the evidence pertaining to a given clinical question.
Natural language technologies have the potential to
improve access to the evidence via semi-automated
processing of the biomedical literature. In this brief
paper I highlight work on developing tasks, corpora, and models to support semi-automated evidence retrieval and extraction. The aim is to design
models that can consume articles describing clinical trials, automatically extract key clinical variables and findings from them, and estimate their
reliability. Completely automating ‘machine reading’ of evidence remains a distant aim given current technologies; the more immediate hope is to
use such technologies to help domain experts access and make sense of unstructured biomedical evidence more efficiently, with the ultimate goal of
improving patient care. Aside from their practical
importance, these tasks pose core NLP challenges
that directly motivate methodological innovation.