Self-training with improved regularization for sample-efficient chest x-ray classification



Publisher
SPIE
Copyright
COPYRIGHT SPIE. Downloading of the abstract is permitted for personal use only.
ISSN
1605-7422
DOI
10.1117/12.2582290

Abstract

Automated diagnostic assistants in healthcare necessitate accurate AI models that can be trained with limited labeled data, can cope with severe class imbalance, and can support simultaneous prediction of multiple disease conditions. To this end, we present a deep learning framework that combines several key components to enable robust modeling in such challenging scenarios. Using an important use case in chest X-ray classification, we provide several key insights on the effective use of data augmentation, self-training via distillation, and confidence tempering for small-data learning in medical imaging. Our results show that, using 85% less labeled data, we can build predictive models that match the performance of classifiers trained in a large-scale data setting.
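
The abstract only sketches the approach at a high level. As a rough illustration of what "self-training via distillation" with "confidence tempering" can look like in a multi-label setting, the snippet below combines a supervised loss on a small labeled set with a distillation loss against temperature-softened teacher confidences on unlabeled images. The function names, temperature value, and loss weighting are assumptions for illustration only and are not taken from the paper.

import torch
import torch.nn.functional as F

def tempered_targets(teacher_logits, temperature=2.0):
    # Soften the teacher's per-class sigmoid confidences (multi-label setting)
    # so the student is not forced to match over-confident pseudo-labels.
    return torch.sigmoid(teacher_logits / temperature)

def distillation_step(student, teacher, images_labeled, labels,
                      images_unlabeled, optimizer, alpha=0.5, temperature=2.0):
    # One optimization step of hypothetical student self-training.
    student.train()
    teacher.eval()

    # Supervised multi-label loss on the small labeled set.
    logits_l = student(images_labeled)
    loss_sup = F.binary_cross_entropy_with_logits(logits_l, labels)

    # Distillation loss: the student matches tempered teacher confidences
    # (soft pseudo-labels) on unlabeled images.
    with torch.no_grad():
        soft_targets = tempered_targets(teacher(images_unlabeled), temperature)
    logits_u = student(images_unlabeled)
    loss_distill = F.binary_cross_entropy_with_logits(logits_u, soft_targets)

    loss = loss_sup + alpha * loss_distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Illustrative usage (models and data loaders are placeholders, not the
# paper's configuration):
#   student = torchvision.models.densenet121(num_classes=14)
#   teacher = torchvision.models.densenet121(num_classes=14)  # trained on the labeled subset
#   optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)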

Journal

Progress in Biomedical Optics and Imaging - Proceedings of SPIE

Published: Feb 15, 2021
