Copenhagen, Denmark
Onsite/Online

ESTRO 2022

Session Item

Implementation of new technology and techniques
7002
Poster (digital)
Physics
Clinical implementation of AI-based contouring workflows from commissioning to automated routine QA
Christian Guthier, USA
PO-1636

Abstract

Clinical implementation of AI-based contouring workflows from commissioning to automated routine QA
Authors:

Christian Guthier1, Roman Zeleznik1, Danielle Bitterman1, Hugo Aerts1, Jeremy Bredfeldt1, Raymond Mak1

1Brigham and Women's Hospital, Dana-Farber Cancer Institute, and Harvard Medical School, Department of Radiation Oncology, Boston, USA

Purpose or Objective

The purpose of this work is the clinical implementation of artificial intelligence (AI) based auto-contouring. This includes the routine workflows as well as initial commissioning and quality assurance (QA) for both in-house and commercial auto-contouring tools. In addition, the in-house algorithms should integrate seamlessly with the treatment planning system (TPS).

Material and Methods

The clinical implementation is two-fold. First, for our in-house developed models, an interface between our TPS and the servers that run the auto-segmentation needed to be developed. The interface consists of an export/import function that sends an anonymized pixel stream to the server and imports the 3D coordinates of the segmentation back into the TPS. In addition, the export function performs safety checks that prevent the transfer of erroneous data. The communication is designed to provide fast and secure data transfer. Second, AI commissioning and routine QA tools were developed and implemented. Commissioning focuses on testing the communication, validating the AI performance for geometric and dosimetric accuracy, and testing failure modes. For automated daily and monthly QA, two tools were developed. These tools run as scheduled tasks and inform the administrators of the system status via automated emails. Daily QA focuses on the communication with the server and on model performance, while monthly QA checks the daily QAs and the safety features. A logfile is generated to keep track of usage, transfer and segmentation times, as well as error flags.
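The geometric-accuracy validation mentioned above is commonly quantified with an overlap metric such as the Dice similarity coefficient between the AI contour and a reference contour. A minimal sketch follows; the function name and the NumPy-based implementation are illustrative and not the authors' actual code:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks.

    Returns a value in [0, 1]; 1.0 means perfect overlap. Both empty
    masks are treated as perfect agreement.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    overlap = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 1.0 if total == 0 else 2.0 * overlap / total
```

In a commissioning context, such a metric would be evaluated per structure (e.g. the heart) against expert contours across the validation cohort, with pass thresholds set by the clinic.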

Results

The interface and QA tools were commissioned and clinically implemented (Fig. 1). The total segmentation time was found to be (48.7±5.3) s. The quality of the segmentation was tested and reported on a large patient dataset (>6000 patients). From this dataset, twenty patients were randomly selected to form a baseline set; all future QAs and updates will be tested against it. Communication of the system is tested daily: one dataset is sent to the server each morning, and a binary comparison of the returned segmentation with the baseline verifies that the AI model has not changed. Monthly QA includes analysis of the logfile and of common failure modes, i.e. known scenarios in which segmentation will fail, ensuring that the error handling works as expected. Running the system for three months revealed one daily QA failure, which occurred while the server underwent maintenance; the monthly QA detected this failure as well.
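The daily binary comparison and logfile described above can be sketched as a small scheduled task: send the baseline dataset, hash the returned contours, compare against the commissioned reference, and append the outcome to a logfile. All names here (`run_daily_qa`, the digest scheme, the log format) are hypothetical illustrations, not the authors' implementation:

```python
import hashlib
import json
import time

def run_daily_qa(segment_fn, baseline_pixels, baseline_digest, log_path=None):
    """Hypothetical daily QA task.

    segment_fn stands in for the TPS <-> server round trip: it takes the
    anonymized baseline pixel data and returns the contour coordinates.
    The returned contours are serialized deterministically and hashed, and
    the digest is compared byte-for-byte against the commissioned baseline
    digest, so any change to the deployed model (or the transfer chain)
    flips the QA result to failed.
    """
    start = time.time()
    contours = segment_fn(baseline_pixels)
    elapsed = time.time() - start

    digest = hashlib.sha256(
        json.dumps(contours, sort_keys=True).encode()
    ).hexdigest()
    passed = (digest == baseline_digest)

    entry = {
        "date": time.strftime("%Y-%m-%d"),
        "segmentation_time_s": round(elapsed, 1),
        "qa_passed": passed,
    }
    if log_path:
        # Append one JSON line per run; the monthly QA would parse this
        # file for usage, timing trends, and error flags. A real tool
        # would also email administrators on failure.
        with open(log_path, "a") as f:
            f.write(json.dumps(entry) + "\n")
    return entry
```

Running this once per morning as a scheduled task reproduces the daily check described in the abstract; the monthly QA can then be a second task that reads the accumulated log lines and verifies the daily results and failure handling.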

 


 

Fig. 1. Clinical workflow (top left), seamless integration into Eclipse via ESAPI scripts (top right), and final heart segmentation (bottom).



Conclusion

This work presents the clinical implementation of an in-house developed heart segmentation tool. Automated QA procedures were developed and are now in place to ensure safe operation. The commissioning strategy and the daily and monthly QA procedures are now being applied to a commercial auto-contouring solution as well.