Development and validation of a deep-learning system for wisdom tooth removal

Project Description

Unnecessary removal of wisdom teeth is an enormous health and economic problem worldwide. For example, costs associated with wisdom tooth removal exceed $3 billion in the USA every year, and wisdom tooth surgery causes approximately 11 million patient days of "standard discomfort or disability" there. Surprisingly, research has shown that more than half of these extractions are unnecessary. Currently, the decision to remove a wisdom tooth rests on the experience of the surgeon and on weighing a wide range of risk and benefit factors, including anatomy, general health, age, dental status, drug history, and other patient-, surgeon- and finance-related factors. Given the numerous interactions between all these factors, it is very challenging to make the correct decision during an average presurgical consultation.

The goal of this project is to create an AI-driven decision flowchart (AIFC). As a first step, the AIFC will generate a categorical output based on a 2D panoramic radiograph (OPG) and clinical signs, resulting in 1) advice to remove a wisdom tooth; 2) advice not to remove a wisdom tooth; or 3) a recommendation for additional 3D imaging with a cone-beam CT (CBCT) scan. If a CBCT scan is available, the system will recommend whether or not to remove the wisdom tooth.
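A minimal sketch of how such a two-stage flow could be expressed in code is given below, assuming the models emit a removal probability and an uncertainty score; the function name, inputs and thresholds are illustrative placeholders, not part of the published system.

```python
from enum import Enum
from typing import Optional

class Advice(Enum):
    REMOVE = "advise removal of the wisdom tooth"
    RETAIN = "advise against removal"
    CBCT = "recommend additional 3D imaging (CBCT)"

def aifc_advice(p_removal: float, uncertainty: float,
                p_removal_cbct: Optional[float] = None) -> Advice:
    """Hypothetical two-stage decision flow (all thresholds illustrative)."""
    if p_removal_cbct is not None:
        # Stage 2: a CBCT-based estimate is available, so give binary advice.
        return Advice.REMOVE if p_removal_cbct >= 0.5 else Advice.RETAIN
    # Stage 1: decide on the 2D OPG and clinical signs; escalate to CBCT
    # when the model is too uncertain to advise either way.
    if uncertainty > 0.3:
        return Advice.CBCT
    return Advice.REMOVE if p_removal >= 0.5 else Advice.RETAIN

print(aifc_advice(p_removal=0.82, uncertainty=0.05))  # Advice.REMOVE
```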

The system will be developed, validated, prospectively tested and implemented in close collaboration with AI experts of Radboud AI for Health and clinical experts at Radboudumc.

Results

Automated detection of third molars and mandibular nerve by deep learning

The proximity of the inferior alveolar nerve (IAN) to the roots of lower third molars (M3) is a risk factor for nerve damage and subsequent sensory disturbances of the lower lip and chin following third molar removal. To assess this risk, the M3 and IAN must be identified on dental panoramic radiographs (OPGs). In this study, we developed and validated an automated, deep-learning-based approach to detect and segment the M3 and IAN on OPGs. As a reference, M3s and the IAN were segmented manually on 81 OPGs.

A deep-learning approach based on U-Net was applied to the reference data to train a convolutional neural network (CNN) to detect and segment the M3 and IAN. Subsequently, the trained U-Net was applied to the original OPGs to detect and segment both structures. Dice coefficients were calculated to quantify the similarity between the manually and automatically segmented M3s and IAN. The mean Dice coefficients for M3s and the IAN were 0.947 ± 0.033 and 0.847 ± 0.099, respectively. Deep learning is a promising approach for segmenting anatomical structures and, eventually, for supporting clinical decision making, though further enhancement of the algorithm is advised to improve accuracy.

Publication: https://doi.org/10.1038/s41598-019-45487-3
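The Dice coefficient reported above quantifies the overlap between a manual and an automatic binary mask as 2|A∩B| / (|A| + |B|). A minimal NumPy sketch follows; the convention of returning 1.0 when both masks are empty is our assumption, not taken from the paper.

```python
import numpy as np

def dice_coefficient(manual: np.ndarray, automatic: np.ndarray) -> float:
    """Dice similarity between two binary segmentation masks."""
    manual, automatic = manual.astype(bool), automatic.astype(bool)
    total = manual.sum() + automatic.sum()
    if total == 0:
        return 1.0  # assumption: two empty masks count as perfect agreement
    intersection = np.logical_and(manual, automatic).sum()
    return 2.0 * intersection / total

# Toy example: a 2x2 reference mask vs. a slightly larger prediction.
reference = np.zeros((4, 4), dtype=bool); reference[1:3, 1:3] = True
prediction = np.zeros((4, 4), dtype=bool); prediction[1:3, 1:4] = True
print(f"Dice: {dice_coefficient(reference, prediction):.3f}")  # 0.800
```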

Classification of caries in third molars on panoramic radiographs using deep learning

The objective of this study was to assess the classification accuracy of dental caries on panoramic radiographs (PRs) using deep-learning algorithms. A convolutional neural network (CNN) based on MobileNetV2 was trained on a reference dataset of 400 cropped panoramic images to classify carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNetV2 was applied to a test set of 100 cropped PRs. The classification accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.86, a specificity of 0.88 and an AUC of 0.90 for the classification of carious lesions in third molars on PRs. The high accuracy achieved with MobileNetV2 supports the further development of a deep-learning-based automated assessment of third molar removal.

Publication: https://doi.org/10.1038/s41598-021-92121-2
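The study's training code is not reproduced here, but the pattern it describes, an ImageNet-pretrained MobileNetV2 with its classification head replaced for a binary caries decision, corresponds roughly to the following PyTorch/torchvision sketch; the learning rate and training-step structure are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# MobileNetV2 pretrained on ImageNet; swap the 1000-class head for a
# binary one (carious vs. non-carious third molar on a cropped PR).
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch of cropped PRs, shape (N, 3, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```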

People

Shankeeth Vinayahalingam

PhD Candidate

Diagnostic Image Analysis Group

Thomas Maal

Professor

3D Lab, Radboudumc

Tong Xi

Oral & Maxillofacial Surgeon

Oral & Maxillofacial Surgery, Radboudumc

Guido de Jong

Research Coordinator

3D Lab, Radboudumc

Hossein Ghaeminia

Oral & Maxillofacial Surgeon

Rijnstate

Bram van Ginneken

Professor, Scientific Co-Director

Diagnostic Image Analysis Group

Stefaan Bergé

Professor

Oral & Maxillofacial Surgery, Radboudumc