Overview
Manual diagnosis is a time-consuming and costly process that involves both technicians and qualified medical experts. Moreover, when a doctor must review large numbers of MRI (Magnetic Resonance Imaging) images quickly, fatigue and work overload raise the risk of misdiagnosis and medical error. As a result, techniques for the automated identification and diagnosis of glioblastoma tumors without human intervention have been proposed (Ranjbarzadeh et al. 2021). We developed a software tool that assists medical professionals in shaping their diagnosis. Our team's proposed method is expected to contribute to a holistic approach to personalized medical care for GBM patients and to achieve high classification accuracy. Its input is MRI scans, and its output is useful information about the characteristics of the brain tumor. It combines radiomic features for the segmentation of MRI scans with a convolutional neural network that filters and identifies correlations among the spatial-imaging features of these scans. We strongly believe that better monitoring of patients could make a difference in both their treatment and their lives. We therefore designed Theriac to enhance the signal of MRI scans. With our software, we aim for a holistic approach to accurate patient monitoring and healthcare.
Introduction
Radiomics
Radiomics is a process that extracts useful information about a patient's health from medical images, such as MRI scans, through data-imaging algorithms. The features it computes, called radiomic features, cannot be recognized by the naked eye and correlate with the prognosis and therapeutic response of various cancer types. Features based on intensity, shape, size, or volume can reveal information about the tumor phenotype and microenvironment. Radiomic features are therefore ideal decision-support tools with many applications in oncology (Ranjbarzadeh et al. 2021).
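To make the idea of first-order radiomic features concrete, the following sketch computes simple intensity and volume statistics inside a tumor mask. The volume, the mask, and the assumed 1 mm isotropic voxel spacing are synthetic placeholders for illustration, not values from our pipeline.

```python
import numpy as np

# Synthetic 3D "MRI" volume and a binary tumor mask (illustrative only).
rng = np.random.default_rng(0)
volume = rng.normal(loc=100.0, scale=15.0, size=(32, 32, 32))
mask = np.zeros(volume.shape, dtype=bool)
mask[10:20, 12:22, 14:24] = True  # hypothetical tumor region

voxel_volume_mm3 = 1.0  # assumed isotropic 1 mm spacing

def first_order_features(volume, mask, voxel_volume_mm3=1.0):
    """Compute a few first-order radiomic-style features inside a mask."""
    roi = volume[mask]
    return {
        "volume_mm3": roi.size * voxel_volume_mm3,
        "mean_intensity": float(roi.mean()),
        "std_intensity": float(roi.std()),
        "min_intensity": float(roi.min()),
        "max_intensity": float(roi.max()),
    }

features = first_order_features(volume, mask, voxel_volume_mm3)
print(features)
```

Real radiomics toolkits add shape, histogram, and texture features on top of these, but the principle is the same: quantitative descriptors of the region a mask delineates.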
Convolutional Neural Network
A Convolutional Neural Network (CNN) is a type of deep neural network that processes images and extracts useful information from them. Its architecture is closely related to the multilayer perceptron but is designed with far lower processing requirements. It consists of neurons connected through learnable parameters organized into kernels, or filters, in each layer. In image processing, a filter is a small matrix that detects patterns such as edges in an image. These filters are initialized and then shaped during training to fit each task. A kernel slides over the input image and executes the convolution operation, which consists of dot products between the kernel and local image patches. In addition, pooling and unpooling methods are used to down-sample feature maps and reduce the complexity of layers (Albawi et al. 2017).
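To make the convolution and pooling operations concrete, here is a minimal NumPy sketch (not our actual model code) of a single edge-detecting filter followed by 2x2 max pooling:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: slide the kernel and take dot products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(feature_map, size=2):
    """Down-sample by taking the maximum of each size x size block."""
    h = feature_map.shape[0] // size
    w = feature_map.shape[1] // size
    trimmed = feature_map[:h * size, :w * size]
    return trimmed.reshape(h, size, w, size).max(axis=(1, 3))

# A vertical-edge image: left half dark, right half bright.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A classic vertical-edge-detection kernel.
kernel = np.array([[-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0],
                   [-1.0, 0.0, 1.0]])

feature_map = conv2d(image, kernel)   # strong response at the edge
pooled = max_pool(feature_map)        # half the spatial resolution
print(feature_map.shape, pooled.shape)  # (4, 4) (2, 2)
```

In a trained CNN the kernel values are not hand-picked like this edge detector; they are learned from data, and many such filters run in parallel per layer.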
Dataset
Given the complexity and heterogeneity of the human brain structure and anatomy, identifying glioblastoma tumors can be challenging. The most used imaging technique for glioblastoma is undoubtedly Magnetic Resonance Imaging (MRI) (Shukla et al. 2017; Wei et al. 2021), which is then reviewed by a specialized medical doctor. MRI manifestations of GBM have even been found to correlate with the overall survival of GBM patients (Li et al. 2012; Menze et al. 2015).
The dataset used, BraTS 2021 from the International Multimodal Brain Tumor Segmentation Challenge, includes 8,000 multi-parametric magnetic resonance imaging (mpMRI) scans from 2,000 cases collected in The Cancer Imaging Archive (TCIA) glioma collection. In more detail, the dataset consists of native (T1), post-contrast T1-weighted (T1ce), T2-weighted, and T2 Fluid Attenuated Inversion Recovery (T2-FLAIR) volumes, as shown in Figure 1. These images were acquired at different institutes and on different scanners under various clinical protocols. The sub-regions inspected for assessment are the enhancing tumor, the tumor core, and the whole tumor.
Figure 1. MRI scans from the dataset used
The data were first co-registered and skull-stripped, and then segmented with an automated hybrid generative-discriminative technique. Imaging variables, such as intensity, volumetric, morphologic, histogram-based, and textural features, as well as spatial information and diffusion properties extrapolated from glioma growth models, were extracted from the final labels. The resulting computer-aided and manually corrected labels allow for comparison across studies and enable quantitative computational and clinical investigations without the need for repeated manual annotations (Bakas et al. 2017).
The segmentation labels, radiomic features for the MRI scans, and glioma sub-region labels were produced by an automated state-of-the-art method and then revised by expert neuroradiologists. Different algorithms were evaluated for the segmentation of the dataset in order to select the most appropriate one; however, different algorithms were found to fit best for different sub-regions, and combining several of them in a hierarchical majority vote leads to much better results than any individual algorithm (Menze et al. 2015). The segmentation labels are 1 for the necrotic tumor core, 2 for peritumoral edematous/invaded tissue, 4 for GD-enhancing tumor, and 0 for everything else. We compare the volumes to each other to extract information about the segmentation labels. In particular, hyperintensity in T1Gd relative to T1 indicates enhancing tumor (ET), and hypointensity indicates the necrotic core. Likewise, hyperintensity in the FLAIR volumes indicates peritumoral edematous/invaded tissue (ED) (Bakas et al. 2017).
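The label convention above can be illustrated with a short sketch that derives the three evaluated sub-regions (enhancing tumor, tumor core, whole tumor) from a label volume. The tiny label map below is made up for illustration:

```python
import numpy as np

# BraTS-style segmentation labels: 0 = background/other, 1 = necrotic
# tumor core, 2 = peritumoral edematous/invaded tissue, 4 = GD-enhancing tumor.
LABEL_NAMES = {0: "other", 1: "necrotic core", 2: "edema/invaded", 4: "enhancing tumor"}

def split_regions(seg):
    """Derive the three evaluated sub-regions from a label volume."""
    return {
        "enhancing_tumor": seg == 4,             # ET: label 4 only
        "tumor_core": np.isin(seg, [1, 4]),      # TC: necrotic core + ET
        "whole_tumor": np.isin(seg, [1, 2, 4]),  # WT: all tumor labels
    }

# Tiny synthetic 2D label map for illustration (real data is 3D).
seg = np.array([[0, 0, 2, 2],
                [0, 1, 1, 2],
                [0, 1, 4, 2],
                [0, 0, 0, 0]])

regions = split_regions(seg)
for name, mask in regions.items():
    print(name, int(mask.sum()))
```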
Structure
Our algorithm consists of the pre-processing method and the Convolutional Neural Network model (CNN model).
Pre-processing Method
Preprocessing techniques are essential for achieving state-of-the-art results on any dataset. Image segmentation is a vital and integral step that determines the success of higher-level image processing. In our case, the primary focus has been on segmenting the brain tumor from the MRI scans; this procedure makes it easier for medical personnel to locate the tumor in the brain (Chattopadhyay et al. 2022).
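As one example of a common MRI preprocessing step (not necessarily the exact pipeline we used), the sketch below z-score normalizes intensities within a brain mask. This matters because the scans come from different scanners and protocols, so raw intensities are not directly comparable across sites:

```python
import numpy as np

def zscore_normalize(volume, brain_mask):
    """Z-score normalize intensities over brain voxels only, leaving
    voxels outside the (skull-stripped) brain mask at zero."""
    brain = volume[brain_mask]
    mean, std = brain.mean(), brain.std()
    normalized = np.zeros(volume.shape, dtype=float)
    normalized[brain_mask] = (brain - mean) / std
    return normalized

# Synthetic volume and mask for illustration.
rng = np.random.default_rng(1)
volume = rng.normal(300.0, 50.0, size=(16, 16, 16))
brain_mask = np.ones(volume.shape, dtype=bool)
brain_mask[:2] = False  # hypothetical region removed by skull-stripping

norm = zscore_normalize(volume, brain_mask)
print(round(float(norm[brain_mask].mean()), 6), round(float(norm[brain_mask].std()), 6))
```

After this step the brain voxels have zero mean and unit standard deviation, which typically stabilizes network training across heterogeneous inputs.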
CNN model
We used a CNN with 3 pooling and 3 unpooling layers and filters of different sizes. In addition, we integrated hyperparameter tuning via grid search, a cross-validation process that estimates and selects the best parameter values for the model. We also selected the Adam optimization algorithm, a stochastic gradient method that relies on adaptive estimates of first-order and second-order moments.
Figure 2. CNN structure
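The Adam update rule mentioned above can be sketched in a few lines of NumPy. This toy example, which is not part of our model code, minimizes a one-dimensional quadratic to show how the first- and second-moment estimates drive the per-parameter step sizes:

```python
import numpy as np

def adam_minimize(grad, x0, lr=0.1, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=500):
    """Minimal Adam: adapt each step using running estimates of the
    gradient's first moment (mean) and second moment (uncentered
    variance), with bias correction for the zero initialization."""
    x = np.asarray(x0, dtype=float)
    m = np.zeros_like(x)  # first-moment estimate
    v = np.zeros_like(x)  # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(x)
        m = beta1 * m + (1 - beta1) * g
        v = beta2 * v + (1 - beta2) * g * g
        m_hat = m / (1 - beta1 ** t)  # bias-corrected moments
        v_hat = v / (1 - beta2 ** t)
        x -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3.
x_min = adam_minimize(lambda x: 2.0 * (x - 3.0), x0=[0.0])
print(x_min)
```

In a real training loop the gradient comes from backpropagation through the network rather than a closed-form function, but the update rule is identical.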
We thank Abdullah Al Munem, programmer and undergraduate student at East West University, whose Kaggle notebook using the same dataset greatly helped us understand the structure and set up our algorithm, which is based on his work.
Results
After evaluating different numbers of epochs, embedding sizes, and kernels, the model reached an accuracy of 99.58%, a precision of 99.58%, a sensitivity of 99.45%, and a specificity of 99.86% (Figure 3). Figure 4 displays the training and validation accuracy against the number of epochs, and Figure 5 displays the corresponding loss. Our software tool is available for testing on our GitHub.
Figure 3. MRI scans with the segmentation labels
Figure 4. Training and validation accuracy
Figure 5. Training and validation loss
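For reference, metrics like these are computed from binary confusion-matrix counts as follows. The counts below are illustrative, not our actual evaluation numbers:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, sensitivity (recall), and specificity
    from binary confusion-matrix counts."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
    }

# Illustrative counts only.
metrics = classification_metrics(tp=90, fp=5, tn=95, fn=10)
print(metrics)
```

Reporting sensitivity and specificity alongside accuracy is important here because tumor voxels are a small minority of each scan, and accuracy alone can look high even for a model that misses them.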
Discussion
The output of the neural network is very promising, although we must consider difficulties that may appear in the future. This software tool was trained on a specific dataset that had already been annotated. To use it on site, some improvements are needed. For example, different types of scanners may lead to different results, and the tool should be validated on a dataset that has not been annotated. It is therefore limited in that it is not universal but specialized to the kind of dataset it has learned from. We wanted to investigate this, but were constrained by the time limit of the competition.
Our software and Theriac
Our ultimate goal is for this algorithm to be used in combination with Theriac as a support tool for more accurate monitoring of patients. Existing MRI scan databases contain imaging with possible pseudoprogression issues; in the future, we should take this into consideration, as it might affect the accuracy of our algorithm. Theriac is designed to avoid causing pseudoprogression artifacts in MRI scans, so an adaptation filter might be needed for better training of our algorithm. Moreover, Theriac could detect metastasized cancer cells distant from the main tumor and enhance their signal in MRI scans; that should also be considered. Small secondary tumor loci that might look like infarcts, strokes, or noise should likewise be accounted for during algorithm training.
References
Albawi, S., Mohammed, T. A., & Al-Zawi, S. (2017). Understanding of a convolutional neural network. 2017
International Conference on Engineering and Technology (ICET).
https://doi.org/10.1109/icengtechnol.2017.8308186
Baid, U., Ghodasara, S., Mohan, S., Bilello, M., Calabrese, E., Colak, E., Farahani, K., Kalpathy-Cramer, J., Kitamura, F. C., Pati, S., Prevedello, L. M., Rudie, J. D., Sako, C., Shinohara, R. T., Bergquist, T., Chai, R., Eddy, J., Elliott, J., Reade, W., … Bakas, S. (2021, September 12). The RSNA-ASNR-MICCAI BraTS 2021 benchmark on brain tumor segmentation and radiogenomic classification. arXiv. Retrieved October 9, 2022,
https://arxiv.org/abs/2107.02314
Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J. S., Freymann, J. B., Farahani, K., & Davatzikos, C. (2017). Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features.
Scientific Data, 4, 170117.
https://doi.org/10.1038/sdata.2017.117
Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J., Freymann, J., Farahani, K., Davatzikos, C., (2017). Segmentation Labels for the Pre-operative Scans of the TCGA-GBM collection [Data set].
The Cancer Imaging Archive.
DOI: 10.7937/K9/TCIA.2017.KLXWJJ1Q
Bakas, S., Akbari, H., Sotiras, A., Bilello, M., Rozycki, M., Kirby, J., Freymann, J., Farahani, K., Davatzikos, C., (2017) Segmentation Labels and Radiomic Features for the Pre-operative Scans of the TCGA-LGG collection [Data Set]. The Cancer Imaging Archive.
DOI: 10.7937/K9/TCIA.2017.GJQ7R0EF
Chattopadhyay, A., & Maitra, M. (2022). MRI-based brain tumour image detection using CNN based Deep Learning Method.
Neuroscience Informatics, 2(4), 100060.
https://doi.org/10.1016/j.neuri.2022.100060
Li, W. B., Tang, K., Chen, Q., Li, S., Qiu, X. G., Li, S. W., & Jiang, T. (2012). MRI manifestations correlate with survival of glioblastoma multiforme patients.
Cancer Biology & Medicine, 9(2), 120–123.
https://doi.org/10.3969/j.issn.2095-3941.2012.02.007
Menze, B. H., Jakab, A., Bauer, S., Kalpathy-Cramer, J., Farahani, K., Kirby, J., Burren, Y., Porz, N., Slotboom, J., Wiest, R., Lanczi, L., Gerstner, E., Weber, M. A., Arbel, T., Avants, B. B., Ayache, N., Buendia, P., Collins, D. L., Cordier, N., Corso, J. J., … Van Leemput, K. (2015). The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Transactions on Medical Imaging, 34(10), 1993–2024.
https://doi.org/10.1109/TMI.2014.2377694
Ranjbarzadeh, R., Bagherian Kasgari, A., Jafarzadeh Ghoushchi, S., Anari, S., Naseri, M., & Bendechache, M. (2021). Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images.
Scientific Reports, 11(1), 10930.
https://doi.org/10.1038/s41598-021-90428-8
Shukla, G., Alexander, G. S., Bakas, S., Nikam, R., Talekar, K., Palmer, J. D., & Shi, W. (2017). Advanced magnetic resonance imaging in glioblastoma: a review.
Chinese Clinical Oncology, 6(4), 40.
https://doi.org/10.21037/cco.2017.06.28
Wei, R. L., & Wei, X. T. (2021). Advanced Diagnosis of Glioma by Using Emerging Magnetic Resonance Sequences.
Frontiers in Oncology, 11, 694498.
https://doi.org/10.3389/fonc.2021.694498