ESTRO Mobility Grant (TTG) Report
FAIR Quantitative Imaging Infrastructure for Deep Neural Networks
Date of visit: 27–28 January 2019
Host institute: Dana-Farber Cancer Institute, Harvard Medical School, Boston, USA
Quantitative, artificial intelligence (AI)-assisted prediction of local control and survival from pre-treatment radiological imaging has largely untapped potential to guide clinical risk estimation. Deep artificial neural networks (DNNs) have been applied to the problem of predicting long-term outcomes from a combination of clinical and imaging data. DNNs acquire their ground-breaking pattern-classification capabilities through a data-driven learning strategy that requires vast volumes of data, and they have shown near-expert levels of performance.
Aim of the Visit
The primary aim of the visit was to define a systematic methodology for making large volumes of radiology images, tumour delineations, treatments and outcomes Findable, Accessible, Interoperable and Reusable (FAIR) for the purpose of training generic DNNs.
Materials and Methods
We developed a FAIR quantitative imaging analysis workflow (FAIR-QIAW) that converts digital imaging and communications in medicine (DICOM) imaging data into FAIR quantitative imaging data. In total, 612 patients from four cohorts available on XNAT (https://xnat.bmia.nl) were included in the study (Figure 1).
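To illustrate the kind of mapping such a conversion involves, the sketch below builds a FAIR-style metadata record from a handful of DICOM attributes. It is a minimal, hypothetical example, not the FAIR-QIAW implementation: the tag subset, the record fields, and the `to_fair_record` helper are all illustrative, and a real workflow would read the attributes from DICOM headers (e.g. with pydicom) rather than from a hand-written dict.

```python
import json

# Hypothetical subset of DICOM attributes; a real workflow would
# read these from the DICOM headers of each study.
dicom_header = {
    "PatientID": "LUNG1-001",
    "Modality": "CT",
    "StudyInstanceUID": "1.2.840.113619.2.55.1",
    "BodyPartExamined": "LUNG",
}

def to_fair_record(header, repository_url):
    """Map DICOM attributes onto a FAIR-style metadata record:
    a globally unique identifier (Findable), a retrieval URL on a
    known repository (Accessible), and coded DICOM vocabulary terms
    (Interoperable), so the record can be reused without the pixel data."""
    uid = header["StudyInstanceUID"]
    return {
        "identifier": uid,                              # unique, persistent ID
        "access_url": f"{repository_url}/data/{uid}",   # hypothetical URL scheme
        "modality": header["Modality"],                 # standard DICOM term
        "body_part": header["BodyPartExamined"],
        "subject": header["PatientID"],
    }

record = to_fair_record(dicom_header, "https://xnat.bmia.nl")
print(json.dumps(record, indent=2))
```

The design point is that the record stands alone: a search over such records can locate and retrieve a study without parsing the underlying DICOM files.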
The proposed FAIR-QIAW program automatically processed the data of all 612 patients. The results for each patient consisted of deep learning-based features, radiomic features, a DICOM-SEG object and a DICOM-SR object. The DICOM-SEG files of several of the datasets used are now available on The Cancer Imaging Archive (TCIA) (https://www.cancerimagingarchive.net). These are: the Reference Image Database to Evaluate Therapy Response (RIDER); the LUNG1 dataset of 422 non-small-cell lung cancer (NSCLC) patients; the Interobserver dataset, also of NSCLC patients; and the head and neck 1 (HN1) dataset.
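The radiomic features mentioned above are, in practice, extracted with a dedicated library such as pyradiomics (van Griethuysen et al.). As a self-contained illustration of what a "first-order" subset of such features computes, the sketch below derives a few statistics from the voxel intensities inside a tumour delineation. The intensity values and the coarse 10-unit histogram binning are toy assumptions for illustration only.

```python
import math

def first_order_features(intensities):
    """Compute a few illustrative first-order radiomic features from
    the voxel intensities that fall inside a tumour mask."""
    n = len(intensities)
    mean = sum(intensities) / n
    energy = sum(v * v for v in intensities)          # sum of squared intensities
    variance = sum((v - mean) ** 2 for v in intensities) / n
    # Shannon entropy over a coarse intensity histogram (10-unit bins)
    bins = {}
    for v in intensities:
        bins[v // 10] = bins.get(v // 10, 0) + 1
    entropy = -sum((c / n) * math.log2(c / n) for c in bins.values())
    return {"mean": mean, "energy": energy,
            "variance": variance, "entropy": entropy}

# Toy masked CT intensities (Hounsfield-like values), for illustration
feats = first_order_features([40, 45, 50, 55, 60, 200, 210, 220])
```

A full radiomic signature would add shape and texture (e.g. grey-level co-occurrence) features over the 3D mask, but the principle is the same: reproducible, quantitative descriptors computed from image plus delineation.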
Conclusion and Future Work
We have developed a workflow that generates FAIR imaging data automatically and directly from DICOM data. Future work will mainly involve applying the proposed workflow in a real study, for instance the development of a lung organ segmentation model via federated deep learning on the DICOM-SEG objects generated by FAIR-QIAW.
Figure 1: Diagram of the data used in this study. All datasets (RIDER, LUNG1, Interobserver and HN1) are available on XNAT, an open-source imaging informatics platform developed at the Washington University School of Medicine.
Figure 2: The four processing levels of FAIR-QIAW.
References
- LeCun, Y., Bengio, Y. and Hinton, G., Deep learning. Nature, 2015. 521(7553): p. 436-444.
- Wilkinson, M.D., et al., The FAIR Guiding Principles for scientific data management and stewardship. Scientific Data, 2016. 3.
- Fedorov, A., et al., Standardized representation of the LIDC annotations using DICOM. PeerJ Preprints, 2019.
- Herz, C., et al., DCMQI: an open source library for standardized communication of quantitative image analysis results using DICOM. Cancer Research, 2017. 77(21): p. e87-e90.
- Hosny, A., et al., Deep learning for lung cancer prognostication: a retrospective multi-cohort radiomics study. PLoS Medicine, 2018. 15(11): p. e1002711.
- Hosny, A., et al., ModelHub.AI: dissemination platform for deep learning models. arXiv preprint arXiv:1911.13218, 2019.
- van Griethuysen, J.J., et al., Computational Radiomics System to Decode the Radiographic Phenotype. Cancer Research, 2017. 77(21): p. e104-e107.
Department of Radiation Oncology (MAASTRO)
GROW School for Oncology & Developmental Biology
Maastricht University Medical Centre
Maastricht, the Netherlands.