Rutwik Patel, Ninad Mehendale

13 Deep learning on medical images to combat a pandemic

Abstract: Humanity is suffering immensely due to the coronavirus disease (COVID-19) pandemic. Learning from previous pandemics, it is clear that faster patient diagnosis is needed in times like these. Owing to scarce human resources, artificial intelligence (AI) must be leveraged to develop quicker and more cost-effective tools. Individuals who experience symptoms, or who believe they may have been in contact with a patient, should get tested. But not everyone who undergoes the test is infected. Processing medical images with deep learning algorithms can classify patients into different categories and thus help detect infected patients. The medical images commonly used for diagnosis are X-rays, CT (computed tomography) scans, MRI (magnetic resonance imaging), and ultrasound. There are significant differences between the medical images of infected and noninfected persons. Radiologists must study these images carefully and perform various calculations for an accurate diagnosis. AI can automate parts of this analysis, reducing human error and leading to faster results. Such solutions can also predict whether a patient must be transferred to the intensive care unit, which helps decrease the burden on hospital resources. Through the use of AI, a large number of medical images can be analyzed quickly, allowing radiologists to diagnose and treat their patients without interruption even during a pandemic. In this chapter, we go through the different imaging techniques used for diagnosis and how AI models can be built for each of them. We also analyze distinct possibilities and ideas for the classification of images. Additionally, we propose two models that can accurately identify COVID-infected patients from CT scans and X-rays.
Keywords: artificial intelligence, medical images, pandemic, radiological diagnosis, precision medicine, classification

Rutwik Patel, K. J. Somaiya College of Engineering, Somaiya Vidyavihar University, Mumbai, India 400077, e-mail: rutwik.patel@somaiya.edu
Ninad Mehendale, K. J. Somaiya College of Engineering, Somaiya Vidyavihar University, Mumbai, India 400077, e-mail: ninad@somaiya.edu

https://doi.org/10.1515/9783110712254-013

13.1 Introduction

Coronavirus disease (COVID-19) has been on the rise, infecting a large number of people [1]. Recently, deep learning (DL)-based technologies have been demonstrated to be an efficacious solution for several problems, and DL has shown how it can help humans deal with this pandemic.

Figure 13.1: Deep learning is a part of machine learning, which comes under the umbrella of artificial intelligence. This shows that artificial intelligence is a broad field comprising various subfields [2].

COVID-19 is a deadly disease caused by a coronavirus that spreads through droplets. Once it enters a person's body, the respiratory system is affected. The analysis carried out by M. Rubiayat et al. [3] explains various aspects of COVID-19 and discusses probable opportunities for data analytics in this contagious disease; the Johns Hopkins University dataset was employed for this analysis. A similar study was carried out by F. Khanam et al. [4], which used freely available datasets to throw light on the various symptoms of COVID-19 and how it differs from other global respiratory diseases. An X-ray image of a person's chest helps identify the affected area and the intensity of infection. This is possible because of advances in DL technology. Various medical imaging methods can be used to diagnose diseases.
As shown in Figure 13.1, artificial intelligence (AI) comprises machine learning (ML) [5] as one of its subsets; the two might seem almost similar but are distinct concepts. Further, ML comprises DL as a more specific part of it. DL comprises neural networks, which have been a boon this decade owing to their wide variety of applications and their success in solving major problems that could not be solved by classical ML or AI techniques [2]. DL models have proved very useful in dealing with images and predicting accurate outcomes, provided an adequate training dataset is available. A well-built, accurate DL model can be trained and deployed if enough labeled images are provided. Diverse training data, varying across parameters such as age, gender, and image class, is required to train the model.

These models can also assist in finding the severity of the disease. Extensive research is being conducted throughout the world to develop well-performing, accurate solutions to combat the pandemic. In this chapter, we discuss and compare various DL-based tools created to detect COVID-19 and other disorders from a variety of medical images. We also propose two models designed to detect COVID-19 infections from the computed tomography (CT) scans and X-rays of patients.

13.1.1 Medical images

Medical image analysis is a significant part of treatment for any disease. Vital conclusions can be drawn from these images as they show tumors, brain injuries, fractures, and so on. These aid doctors in taking important diagnostic decisions.
As mentioned earlier, the most common medical images are X-rays, CT scans, MRIs (magnetic resonance imaging), and ultrasound.

X-ray images: X-rays are electromagnetic radiation that can see through the skin and show images of the bones underneath. Wilhelm Roentgen discovered X-rays by accident while trying to pass cathode rays through glass sheets. He observed that when a high-voltage cathode-ray tube was kept in the vicinity of crystals, the crystals demonstrated an illuminating glow, and this phenomenon was exhibited even when the crystals were covered with a dark sheet. Experimental results proved that this radiation could penetrate the skin and muscles but not bones, so it can be used to produce shadow images on photographic plates. X-rays have now become one of the primary diagnostic tests for several conditions. Some issues brought to light by X-rays are fractures, tumors, injuries, lung problems, and deformities. Thus, an X-ray can reveal many serious conditions early, enabling doctors to provide adequate treatment to bring the disease under control. Since X-rays do not show soft tissues, problems in soft-tissue organs such as the kidney and intestines must be detected using some other medical imaging technique.

CT: Computers and rotating X-ray machines are used in CT scanners to build cross-sectional images of various body parts. CT scans reveal much more information than an X-ray, as they highlight soft tissues, bones, blood vessels, and various other structures. Through a CT scan, we can visualize parts of the body such as the heart and chest. This aids the doctor in diagnosing infections, muscle disorders, and bone fractures, and in studying blood vessels. It can also reveal the presence and extent of internal injuries or internal bleeding. CT scans have certain disadvantages as well.
CT scans expose a person to greater amounts of radiation than a normal X-ray, and the risk of radiation-induced cancer increases with the number of scans performed. This risk is higher in children, especially for scans of the chest and abdomen. In the case of pregnant women, it is better to use a different imaging technique to eliminate the radiation risk. Thus, we can conclude that the CT scan is an extremely useful technique if used occasionally and cautiously.

MRI: In this medical imaging technique, radio waves and strong magnetic fields are used to generate images of different body organs. There are several reasons why one might need an MRI: to understand the staging of cancer, to diagnose a disease or an injury, or to assess how well a patient is responding to treatment. It is used for looking at soft tissues and the nervous system and can be used to check the health of various organs. An MRI of the brain shows blood clots, cancer, and stroke. An MRI of the blood vessels and heart shows clogged vessels, damage caused by a heart attack, irregularities in the shape of the heart, or heart disease. Bone and joint MRIs look for joint damage, bone infections, disk problems in the spine, and so on. With all these benefits come a few drawbacks. An MRI requires the patient to stay still in a closed machine, which may cause issues for patients with claustrophobia. Since the magnetic field in the MRI machine can attract metal, any undetected metal implants might cause problems during the test. MRIs are also expensive and time-consuming. Another disadvantage of an MRI is that the radiofrequency energy used can heat the body, resulting in problems such as increased irritability, profuse sweating, and dry skin.

Ultrasound: It is an imaging technique used in medical diagnosis.
An image is developed using high-frequency sound waves and their echoes; the image produced by ultrasound is known as a sonogram. Ultrasound does not use radiation and usually does not require any special preparation, making it a safe and smooth process and, hence, a commonly used one for treatment and diagnosis. Sonograms can help detect problems in the liver, kidney, heart, or abdomen, and assist in performing certain types of biopsies. They are often used to evaluate fetal development. An ultrasound scan can help classify a lump as a tumor. Ultrasound also examines internal organs such as the pancreas and thyroid glands, as well as soft tissues, muscles, blood vessels, tendons, and joints. An ultrasound of the gallbladder produces echoes only if gallstones are present: the denser the object hit by the ultrasound, the stronger the echo it returns. The different shades of gray in the sonogram represent different echoes and, therefore, different densities. Like any other imaging technique, ultrasound comes with its own set of disadvantages. In certain cases, such as an infected gallbladder or liver, the patient may need to fast for several hours before the procedure. For an examination of some parts of the digestive system, an internal ultrasound might have to be performed, which carries a slight risk of internal bleeding. In Figure 13.2, with the help of the segmentation process, the region of importance in a sonography image can be found. The red rectangle indicates the positive patches, or the region of interest (ROI), which can be separated from the adjoining areas that may be treated as negative patches [6].

Figure 13.2: Sample-generated sonographs are segmented into positive and negative patches, which helps enhance the major region of focus, namely the positive patches [6].
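The patch-based splitting shown in Figure 13.2 can be sketched in a few lines of NumPy. The tile size, ROI coordinates, and overlap threshold below are hypothetical, and a simple area-overlap rule stands in for the actual patch-selection procedure of [6]:

```python
import numpy as np

def split_patches(image, roi, patch=32, overlap_thresh=0.5):
    # Tile `image` into patch x patch tiles; a tile is "positive" when at
    # least `overlap_thresh` of its area falls inside the ROI rectangle.
    # roi = (row0, col0, row1, col1), exclusive on the high end.
    r0, c0, r1, c1 = roi
    pos, neg = [], []
    h, w = image.shape
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            dr = max(0, min(r + patch, r1) - max(r, r0))   # row overlap
            dc = max(0, min(c + patch, c1) - max(c, c0))   # column overlap
            tile = image[r:r + patch, c:c + patch]
            (pos if dr * dc >= overlap_thresh * patch * patch else neg).append(tile)
    return pos, neg

# toy "sonogram": 128 x 128 with an ROI covering the upper-left quadrant
img = np.random.rand(128, 128)
pos, neg = split_patches(img, roi=(0, 0, 64, 64))
print(len(pos), len(neg))  # → 4 12
```

The positive patches can then be fed to a classifier while the negative patches serve as background examples.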
13.1.2 Issues during pandemics

According to the World Health Organization, a pandemic is defined as the global spread of a disease. Since the disease in question is new, there is no prescribed method of treatment, no built-up immunity, and no vaccine. More research and study have to be done to find treatments that can control any virus outbreak during a pandemic, and a lack of knowledge about the pathogen or virus causing the disease increases the time needed to prepare a vaccine. If the ailment is contagious, it can infect a large number of people within a short time, which can lead to a high mortality rate. A pandemic can be a trying time for the economy, causing short- as well as long-term damage. It also causes an aversion to crowded places. Countries that do not have a strong healthcare system can face several problems. Since hospital resources remain limited, an increase in the number of patients puts a strain on medical systems. During pandemics, there is a dearth of medical resources and clinical equipment, like sanitizers, surgical masks, and syringes. The situation is much worse for specialized medical equipment and services like intensive care units, ventilators, and extracorporeal membrane oxygenation.

If there are more patients than hospitals can take in, resources must be allocated based on the severity of illness and the amount of medical attention required. Differences of opinion in handling such matters can induce political tension and instability. Due to the high number of cases, there is usually a delay in receiving the results of tests performed, which can be extremely harmful in some cases. The prices for these tests are usually steep, making them unaffordable for a huge chunk of the population. All countries need an adequate supply of test kits to understand the spread of the disease. Pandemics also disrupt educational systems all over the world due to the total shutdown of universities and schools.
Closure of schools is a problem with long-term economic and societal consequences. It leads to various other issues, such as student debt and exhaustion of Internet data, and households with relatively lower incomes might struggle to pay their institutions' educational fees.

13.1.3 Deep learning techniques

There has been extensive research in using AI for medical image diagnosis, which has resulted in the development of several AI techniques and the rise of different neural networks that perform classification tasks accurately. Lung abnormalities can be detected by applying DL to CT scans and X-rays of the lungs and chest area. Lung infections, especially pneumonia, can have serious consequences, and manual diagnosis of pneumonia can be time-consuming and subjective, making ML a suitable alternative. Bhandary et al. proposed DL methods that include a modified version of AlexNet (MAN), with classification performed by a support vector machine (SVM). Principal component analysis and serial fusion were used for further feature selection. An accuracy of 86.47% was achieved using the proposed MAN with the SVM classifier; the same arrangement, when combined with an ensemble feature technique, resulted in an accuracy higher than 97.27%. This proved that the MAN framework can work well on image datasets [7]. The medical imaging workflow for COVID-19 includes acquiring the image, segmenting its parts, diagnosis, and follow-up [8]. A mobile CT platform was designed using AI: the mobile platform was divided into a completely separated control room and scan area, where technicians could monitor patients through a window and an AI camera could be used to correct the posture of the person [8]. U-Net can be used to segment CT images. It is difficult to diagnose the affected regions in the lungs just by using CT images, so additional computer-aided diagnosis (CAD) is required to mark the affected region.
With the use of U-Net, blood vessels and nodules can be segmented in the CT scans of patients with abnormal lung parenchyma. Lung tissue from the bronchus region is often confused with lung parenchyma, so there is a need to separate the two. The model was based on Keras with a TensorFlow backend. Emad et al. [9] and Skourt et al. [10] explained how MRI images can be used for localization of the left ventricle using DL. Convolutional neural networks (CNNs) were used for localization. A technique called the pyramid-of-scales method was used for localizing the heart; this technique helped because the size of the heart can vary across images. MRI volumes of 19 patients were used to compile the database. The learning time of the network was about 8.9 min per epoch, and the network was trained for 10 epochs [9]. Ravishankar et al. used transfer learning, enabling a CNN trained on ImageNet for image classification to detect kidney-related problems in ultrasound images; the detection relied on the extent of the transfer. Using transfer learning with a CNN, a 20% increase in performance was achieved. The intermediate response images produced by the network were investigated, and these responses were compared with recently developed image-processing filters to gain insight into how transfer learning can efficiently handle widely varying imaging regimes [6]. In Figure 13.3, it is visible that the images are not of optimum quality and have irregularities; for example, the bottom-left image is tilted toward the right, and the top-left image is blurred. Despite this, localization is successfully achieved. These accurate results proved that the model is robust. The position of the left ventricle is denoted by the red square in the figures [9].
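The idea behind a pyramid-of-scales search, scanning windows of several sizes and keeping the best-scoring one, can be sketched as follows. This is only an illustration of the search strategy: a stub scoring function stands in for the trained CNN of [9], and all sizes and strides are hypothetical:

```python
import numpy as np

def localize(image, score_fn, scales=(32, 48, 64), stride=8):
    # Slide square windows of several sizes over `image` and return the
    # highest-scoring box (row, col, size). `score_fn` stands in for a
    # CNN that rates how target-like a window looks.
    best, best_score = None, -np.inf
    h, w = image.shape
    for size in scales:
        for r in range(0, h - size + 1, stride):
            for c in range(0, w - size + 1, stride):
                s = score_fn(image[r:r + size, c:c + size])
                if s > best_score:
                    best, best_score = (r, c, size), s
    return best

# stub scorer: mean intensity, so the brightest region wins
img = np.zeros((96, 96))
img[40:72, 40:72] = 1.0          # a bright 32 x 32 "ventricle"
print(localize(img, lambda win: win.mean()))  # → (40, 40, 32)
```

Scanning several scales is what makes the search robust to the varying apparent size of the heart across images.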
Figure 13.3: Images with irregularities and improper resolution and contrast were also successfully processed when localizing the heart position, indicated by the red square in each image [9].

13.1.4 Different neural networks

Medical imaging has led to a great evolution in medicine. With the rise of neural networks over the past 20 years, there have been great advancements in various fields, including medicine. Neural networks are used in various areas of medical imaging such as image segmentation, object detection, and image denoising. Artificial neural networks (ANNs) were used earlier for mammography, ultrasound, thermal imaging, MRI, and various other tasks in the medical field. In 1999, ANNs were used to classify breast cancer as malignant or benign. For thermal imaging, Lagrange constraint neural networks were used to classify multispectral infrared images in 2003. In the 2012 edition of the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC), CNNs halved the error rate of the second-best solution. This led to CNNs being used for various classification tasks, and CNNs have since surpassed human-level performance on ILSVRC. This success gave rise to various CNN-based architectures such as ResNets, DenseNets, and VGG. After the success of CNNs for classification tasks, InceptionNet [11] was introduced by Google as an extension of GoogLeNet. This network was able to capture small as well as large features owing to the various kernel sizes used in parallel. Since the network is very deep, there was a risk of vanishing gradients, so auxiliary classifiers were introduced to mitigate the vanishing gradient problem.

Earlier models focused on using bigger kernels to capture features, but VGG (Visual Geometry Group) [12] was the network that consistently used 3 × 3 kernels in its architecture.
The main motivation for this was to increase the nonlinearity in the model: stacking several 3 × 3 kernels introduces more nonlinearity than using bigger kernels. Various model variants are available depending on the number of layers, such as VGG13, VGG16, and VGG19. The deeper a network gets, the greater the problem of vanishing gradients. ResNet [13] was created to solve this problem. ResNet models have a very deep architecture but with skip connections, which permit the network to propagate gradients deeper into the network and hence avoid the vanishing gradient problem. Various implementations of ResNet, like ResNet34 and ResNet50, are available. Another widely used neural architecture is DenseNet [14]. This model has a dense architecture, meaning that each layer passes its feature maps on to every subsequent layer, all the way to the final output layer. Although DenseNet has a very deep architecture compared to other models, its training time is comparatively short and on par with smaller models. DenseNet models offer feature reuse, since inputs are passed from one layer to all subsequent layers, which leads to fewer parameters in the network. The flow of information is very efficient, thereby reducing the training time of the network.

Figure 13.4: A neural network consists of neurons that each perform a defined function on the input provided to them; the output is then fed to other neurons further along the network, creating a web of interconnected neurons at each stage (Liu et al., 2018).

In Figure 13.4, the neural network comprises various layers as well as functions that define how efficiently the network learns, based on the loss function and the weights of each input layer. The neural network processes large datasets in batches, and with each iteration, it updates
The neural network processes large datasets in the form of batches and with each iteration; it updates Figure 13.4: Neural network consists of neurons that perform a defined function based on the input provided to each neuron and then the output is fed to other neurons moving further in the AU: Please clarify network creating a web of interconnected neurons in each stage (Liu et al., 2018). whether the citation “Liu et al. 2018” re- fers to Ref. [42] or [44] in Fig. 13.4. 13 Deep learning on medical images to combat a pandemic 239 the new values of loss function and error. An efficacious neural network aims to minimize the error and accurately predict the required output. Based on the config- uration of the neural network, the loss function is chosen, and the behavior of the neural network is selected. Neural networks need to be trained on the dataset before deploying it for the actual purpose (Liu et al., 2018). AU: Please clarify whether the citation “Liu et al 2018” in the sentence “Neural networks need. . .” 13.2 Literature review refers to Ref. [42] or [44]. 13.2.1 Classification of X-rays With the rise of DL solutions in the past, assessment, and analysis of X-rays have become easier and faster along with better results. A study [15] outlines CheXNet, a neural network that is used in pneumonia detection using DL on chest X-rays. The results obtained by this study achieve relatively higher accuracy since the neural network consists of many layers. It has been proved to be effective on around 14 disease detection by deploying their model on chest X-rays. nCOVnet [16], which is DL, fast screening approach, identifies COVID-19 by examining the X-rays of pa- tients. Patients are categorized as “COVID-19 positive” patients and “COVID-19 neg- ative” patients with the confidence of 97.97%. The nCOVnet model is a potential replacement for real-time polymerase chain reaction (RT-PCR) as it takes approxi- mately 6 h for identifying COVID-19 patients. 
A study [17] aims at using DL methods for the detection of pneumonia in chest X-ray images. The model is built on Mask R-CNN (region-based convolutional neural network), a network that uses both local and global features for pixel-wise segmentation. The dimensions of the lungs play an important role in the performance of the model: the larger the image, the more information is obtained, but large images increase the computational cost. Applying image augmentation, dropout, and L2 regularization prevents overfitting but results in poorer outcomes on the training set relative to the test set. Another study [1] proposes a DL-based hybrid model that combines CNNs with a spatial transformer network to detect lung diseases. A unique model for the detection of pneumonia through X-rays of the chest region was proposed by Vikash et al. [18]. In this model, features of the input images were procured using various neural networks pretrained on ImageNet, and the final classification was provided by a classifier fed with the extracted features. DL can also be used to detect tuberculosis (TB) in chest radiographs. A comparison has been performed between the efficiencies of CNNs based on images only (I-CNN) and CNNs that also include demographic variables (D-CNN) [19]. About 1,000 positive and negative chest X-ray images of TB were used to train the D-CNN and I-CNN models. InceptionV3, DenseNet121, InceptionResNetV2, VGG19, and ResNet50 were used for feature extraction. At the same cutoff point, for a specificity of 0.962, the D-CNN models were more sensitive than the I-CNN models. Therefore, ML can aid the recognition of TB in chest X-rays, and demographic factors help improve the network [19]. A human coronavirus spike protein and its visualization are shown in Figure 13.5.
The parts of the structure are, namely, the nucleocapsid (N), spike (S), RNA viral genome, membrane (M), and envelope (E). The nucleocapsid of the coronavirus is a multifunctional protein, and the membrane along with the envelope is present in the budding compartment of the host cell [16]. In Figure 13.6, the ROI in a chest X-ray is found by the segmentation process. The input is a chest X-ray image, the second panel shows the area to be segmented in black, and after applying the segmentation procedure we get the final segmented image showing the ROI for radiologists [19].

Figure 13.5: A human coronavirus spike protein and its visualization. The parts of the structure are, namely, the nucleocapsid (N), spike (S), RNA viral genome, membrane (M), and envelope (E) [16].

Figure 13.6: A mask of the area to be segmented is created on the chest X-ray image to focus on the area of interest, from which a segmented image is then formed from the original X-ray [19].

The authors of [20] proposed a generative adversarial network (GAN) for the detection of COVID-19 on X-rays. Since the available datasets are small, they used a GAN to generate more images, thereby increasing the dataset size. Initially, the dataset had 307 images collected by the authors; with the help of the GAN, they increased it to 8,100 images. For the transfer learning part, AlexNet, ResNet, and GoogLeNet were chosen since those networks have fewer layers, which reduces complexity and increases computational speed [20]. Figure 13.7 shows the structure of a GAN block. In this structure, the input is a noise vector that is sent to a generator, whose task is to create images based on the noise vector. The generated fake images are then sent to the discriminator, whose task is to distinguish between real inputs and the fake inputs fed to it.
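The noise-to-generator-to-discriminator flow just described can be sketched with toy linear layers. Everything here is illustrative: the weights are random and untrained, so this shows only the shape of the data flow, not real GAN training:

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, W):
    # toy generator: one linear layer + sigmoid -> flattened 8 x 8 "image"
    return 1.0 / (1.0 + np.exp(-(z @ W)))

def discriminator(x, v):
    # toy discriminator: a linear score; > 0 interpreted as "real"
    return float(x @ v)

z = rng.normal(size=16)        # noise vector
W = rng.normal(size=(16, 64))  # untrained generator weights
v = rng.normal(size=64)        # untrained discriminator weights

fake = generator(z, W)
print(fake.shape)  # → (64,), one flattened fake image
```

During actual training, the discriminator's errors on such fake images drive updates to both networks until the generated images become hard to tell apart from real ones.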
The network learns in such a way that it creates new images based on the images it has been trained on [20].

Figure 13.7: The noise vector block's output is fed to the generator network, which produces fake images; the discriminator network then discriminates between real and fake images and gives the predicted labels as the output [20].

13.2.2 Computed tomography (CT) using deep learning

The CT scan is known to be an important tool for understanding the severity of infection in COVID-19 patients. With the increase in the volume of patients, results get further delayed, so efficient and rapid detection of COVID-19 is a must in order to provide COVID-19 patients with appropriate medical facilities.

A study by Zheng et al. [21] introduces a supervised DL-based program developed to detect COVID-19, proposing a network known as DeCovNet. The DeCovNet model takes as input the CT volume together with the three-dimensional (3D) lung mask created by a trained UNet. The CT volumes were preprocessed before being fed to the two-dimensional (2D) UNet, and each CT volume was combined with the mask volume (obtained from the trained UNet) to form CT-mask volume data. Data augmentation was performed to prevent overfitting. DeCovNet was developed with PyTorch, and the network was trained for 100 epochs using the Adam optimizer with a learning rate of 1e-5. Different values for positive predictive value, accuracy, and negative predictive value can be obtained by varying the probability threshold. In the study conducted by Zhenyu et al. [22], about 63 quantitative features were calculated to classify a CT scan as "severe" or "nonsevere." Chest CT images of 176 confirmed COVID-19-positive patients were used, and the ratio between nonsevere and severe examples was approximately 11:5 throughout the dataset.
A random forest (RF) model consisting of 500 individual decision trees was used with k-fold cross-validation. The 63 calculated quantitative features were given to the RF model as input, and the output was one of the two labels mentioned earlier. The model achieved a true negative rate of 0.745, an accuracy of 0.875, a true positive rate of 0.933, and an area under the receiver operating characteristic curve of 0.91. It was noticed that the quantitative features extracted from the right lung gave a better estimate of severity than the features extracted from the left lung. The chest CT is fed to a segmentation block where, based on the defect, each part of the lung is segregated using various color codings, as shown in Figure 13.8. The various lung lobes and segments are precisely segregated based on infected areas. Thus, with this segmentation technique, an individual can easily identify the infected parts of the lung, and the subsequent quantification block helps analyze the various parameters involved in segmenting and analyzing the lung as a whole [23]. Pneumonia in COVID-19 patients could be accurately evaluated using chest CT combined with the uAI Intelligent Assistant Analysis System. In a study conducted by Zhang et al., clinical characteristics were examined using the RT-PCR results of COVID-19-positive patients received from a hospital, and the uAI Intelligent Assistant Analysis System evaluated the CT scans. The generalized model revealed that the right lower section of the lung was most often found to be the site of COVID-19 pneumonia [23].

Figure 13.8: The chest CT is fed as input and segmented into lung lobes and various other segments denoted by different colors. With this, the defect inside the lungs is easily detected, and further quantitative analysis and visualization results can be obtained [23].

The study conducted by Lin et al.
[24] uses AI to distinguish between community-acquired pneumonia (CAP) and COVID-19 using CT scans. A dataset consisting of 4,356 3D volumetric chest CT exams was prepared, comprising 1,296 COVID-19-positive, 1,735 CAP-positive, and 1,325 normal CTs. A 3D DL framework called COVNet was designed by the authors to detect COVID-19. It was built on ResNet50 and can extract 3D global as well as 2D local representative features. It takes CT slices as input and produces features for the slices. A max-pooling operation is applied to the extracted features, and the resulting feature map is sent to a fully connected layer with an activation function. The model was tested on a separate testing dataset, and statistical analysis was conducted on the results to find the sensitivity and specificity in recognizing COVID-19 and CAP. Some drawbacks of this study were the lack of transparency of DL models and the lack of laboratory confirmation, which made it impossible to distinguish between different kinds of viral pneumonia. It is seen in Figure 13.9 that, with the help of chest CT images, the level of severity of the disease can be easily interpreted by visualizing images showing two different structures and thus two different levels of severity. Based on the volumes of the various processed parts, it was easily interpretable whether the condition of a patient is severe or nonsevere; for example, in the nonsevere case, the volume of mucus is larger than in the severe case [22]. Thus, by processing the contours and separating various parts of the CT through edge detection, we can predict whether the condition is severe or nonsevere. Mei et al. [25] state that although CT scans help detect whether patients have COVID-19, they are not entirely accurate in making that assessment. Hence, Mei et al.
[25] combined the utility of CT scans with the detailed medical history of a patient to detect if that patient is COVID-19 positive or not. The medical history of the patient included respiratory disorders and a history of exposure to COVID-19 positive patients. The proposed model was tested on 279 cases, on which it attained an area under the curve of 0.92 and a sensitivity of 84.3%, whereas a senior thoracic radiologist scored 74.6% on sensitivity on the same test cases. In Figure 13.10, a combination of ResNet and COVNet is shown. Both these networks share weights between their layers. The outputs from both networks are merged with the help of a max-pooling layer. With both networks working as a whole, the input chest CT image is classified into COVID-19, CAP, and nonpneumonia labels [24].

Figure 13.9: With the help of chest CT images, the level of severity of a disease can be easily interpreted by visualization of the images showing two different structures and thus two different levels of severity [22].

The study conducted by Wu et al. proposed a fast screening method based on DL to detect COVID-19 pneumonia using a multiview model of CT images of patients.

Figure 13.10: CT images are fed as input, one to ResNet50 and another to COVNet. The weights of the layers are shared between the two networks before the max-pooling layer [24].

Using multiple views helped in increasing the accuracy compared to a single view. For the inputs to the ResNet50 model, the slices selected had the largest number of pixels in the segmented lung area from each of the sagittal, axial, and coronal views. The lung region in each axial CT slice was segmented first, followed by the coronal and sagittal CT slices [26]. In Figure 13.11, CT slices that form a segment of the lungs are fed to residual blocks of the network in coronal, axial, and sagittal views.
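The per-view slice selection used by this multiview model can be sketched as follows. This is a minimal illustration assuming binary lung masks have already been produced by the segmentation step; the function names and toy masks are ours, not the authors' code.

```python
def lung_pixel_count(mask):
    """Count segmented-lung pixels in one binary slice mask (nested 0/1 lists)."""
    return sum(sum(row) for row in mask)

def select_slice(masks):
    """Index of the slice whose segmented lung area is largest in this view."""
    return max(range(len(masks)), key=lambda i: lung_pixel_count(masks[i]))

# Toy stack of three 2 x 2 lung masks for one view (axial, coronal, or sagittal).
axial_masks = [
    [[0, 0], [0, 1]],  # 1 lung pixel
    [[1, 1], [1, 0]],  # 3 lung pixels -> this slice feeds ResNet50
    [[1, 0], [0, 1]],  # 2 lung pixels
]
best_axial = select_slice(axial_masks)  # -> 1
```

Repeating this for the coronal and sagittal stacks yields the three per-view inputs that the multiview model then fuses.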
The residual block is responsible for summing the three branches mentioned earlier. The result is then fed into the dense layer that carries out calculations to estimate the COVID-19 risk values [26]. Hu et al. [27] proposed a weakly supervised DL method that detects and classifies COVID-19 infections using only weakly labeled CT scans. The proposed system can be deployed at a large scale as it reduces the need for labeled input CT scans.

Figure 13.11: CT slices forming a segment of the lungs are fed to the residual block as coronal, axial, and sagittal views. The residual block sums up all three branches and feeds the result to the dense layer that helps in analyzing the COVID-19 risk [26].

13.2.3 Classification of magnetic resonance imaging (MRI)

Since MRI works differently from X-rays, the risks due to ionizing radiation are eliminated. Thus, it has a significant advantage over X-rays and CT scans, making it a preferred diagnostic test. A survey conducted by Lee et al. [28] provides an elaborate overview of DL-based image processing of MRI scans and its analysis. The article focuses on various techniques present in AI such as ML and DL. It explains different techniques present in DL, such as RNNs and CNNs, and their basic structures. Of the two approaches, supervised and unsupervised CNN learning, unsupervised learning is shown to be beneficial as the probability of overfitting is minimized. A technique known as fast region-based CNN is used to detect disease and label the affected region. This technique evaluates the radiological image only in certain regions. Other applications such as AI-assisted detection, analysis, reading assistance, and automatic dictation have been mentioned. The study conducted by Liu et al.
(2018) [42] evaluates the feasibility of DL approaches for MRI-based attenuation correction (AC) of positron emission tomography (PET). The molecular sensitivity and specificity of PET are combined with the soft tissue contrast of MRI in simultaneous PET/MRI. This method has certain disadvantages, like the dearth of bone estimation from normal MRI-based AC (MRAC). The convolutional autoencoder architecture is explained briefly. It is mentioned that the training data comprise unprocessed MRI images as input, followed by a brief explanation of the training procedure. To capture the entire head of the subject, the postcontrast T1-weighted image was used as input for the model. The accuracy of deep MRAC tissue labeling was evaluated. Further, a prospective PET evaluation was conducted with five additional subjects, none of whom had tumors in the brain. Işın et al. [29] conducted a comprehensive review of DL-based tools that can perform image segmentation on MRI scans of the brain to detect brain tumors. This study focuses on recent developments in the field of brain tumor detection using DL. It presents an elaborate comparison between the different types of image segmentation used in brain tumor detection, as well as the various challenges faced by each one. A DL-based solution for the reconstruction of compressed sensing MRI (CS-MRI) has been developed using the de-aliasing GAN-based model (DAGAN) [30]. For various clinical applications, faster acquisition is essential, which CS-MRI can provide. It can deliver better image quality by reducing the effects of contrast washout and motion artifacts, and it also reduces the scanning cost. An end-to-end network that decreases aliasing artifacts is obtained by using a refinement learning method designed in DAGAN to stabilize a U-Net-based generator.
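Tissue-labeling accuracy in studies such as this MRAC evaluation is commonly scored with the Dice coefficient, 2|A∩B| / (|A| + |B|). A minimal sketch follows; representing the masks as flat 0/1 lists is our simplification, not the study's implementation.

```python
def dice_coefficient(a, b):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    intersection = sum(x * y for x, y in zip(a, b))
    size_a, size_b = sum(a), sum(b)
    if size_a + size_b == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * intersection / (size_a + size_b)

# Toy example: predicted vs. reference soft-tissue labels.
pred = [1, 1, 0, 1, 0, 0]
ref = [1, 0, 0, 1, 1, 0]
score = dice_coefficient(pred, ref)  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means the predicted tissue labels coincide exactly with the reference, while 0.0 means no overlap at all.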
In order to be suitable for real-time processing, each image is reconstructed in 5 ms. The diagnosis of various regions affected in a chest X-ray can be visualized based on one X-ray image, as shown in Figure 13.12. Different visualizations of the affected parts of the X-ray image have been mentioned. These include two nodules, pleural effusion and pneumothorax, consolidation and pleural effusion, two interstitial opacities, cardiomegaly, and two pleural effusions and consolidation, all of which can be detected from X-rays [28].

Figure 13.12: The diagnosis of various regions affected in a chest X-ray image can be visualized based on one X-ray image of the regions affected. Two nodules, pleural effusion and pneumothorax, consolidation and pleural effusion, two interstitial opacities, cardiomegaly, and two pleural effusions and consolidation are the various visualizations mentioned [28].

13.2.4 Ultrasound image analysis using deep learning

As mentioned earlier, ultrasound has a significant advantage over other medical imaging techniques because it uses sound waves instead of ionizing radiation or magnetic fields. Several AI methods have been designed to help with ultrasound-based diagnosis. Liang et al. [31] presented a study using CNNs for the early detection of thyroid cancer and breast cancer. The models were trained with sonography images that fully exposed the breasts and the axillary areas, captured while the patient had both hands lifted completely above their head. A statistical method known as the Shapiro–Wilk test was employed to analyze the data. The prediction accuracy was very promising and could help evolve ultrasound-based CAD systems into efficient analysis tools. Guo et al. [32] proposed a multiview, multistage framework for the CAD of liver tumors, which uses the technique of contrast-enhanced ultrasound. The dataset that was used was obtained from 93 patients.
Statistical features were obtained from the ROI, and these were used in conjunction with a deep canonical correlation analysis algorithm. This framework has high accuracy while being computationally inexpensive [32]. Chi et al. [33] presented a system for examining thyroid nodules in ultrasound images. A DL method was employed for the extraction of features from preprocessed thyroid ultrasound images. The GoogLeNet model was fine-tuned to obtain superior features. These extracted features were classified as benign or malignant using loss-sensitive RF classification. For the images in an open-access database, the specificity was 93.90%, the sensitivity was 99.10%, and the classification accuracy attained was 98.29%. Images from a local health database had a classification accuracy of 96.34%, a specificity of 99%, and a sensitivity of 86% [33]. Through a comprehensive comparison of the most recent CNN models [34], the VGG19 model was selected to detect COVID-19, pneumonia, and other diseases from X-rays, CT scans, and ultrasounds of patients. The authors then made some modifications to the VGG19 model so that it works well on weakly labeled and demanding COVID-19 datasets. They discovered that the final model attained the highest accuracy when it was applied to ultrasound images.

13.3 Comparison of different techniques reported in the literature

Techniques | Medical image type | Performance | Dataset details | References
CheXNet | Chest X-ray | Difference between the F1 scores of radiologists and CheXNet, reported with a confidence interval. | Frontal-view X-ray images. | []
Mask RCNN | Chest X-ray | – | – | []
DeCovNet | CT | Accuracy at a fixed probability threshold, with positive and negative predictive values. | CT scans collected from December 2019 to February 2020 are stored in the dataset. | []
Random forest classifier | CT | The model showed a true negative rate of 0.745 along with a true positive rate of 0.933. It had 0.91 as the area under the receiver operating characteristic curve and an accuracy of 0.875. | Chest CT images of patients who had COVID-19 were used. | [22]
Convolutional auto-encoder architecture | MRI | Deep MRAC gives mean Dice coefficients for air and for soft tissue (reported as mean ± SD). | The dataset comprised patients who underwent, on the same day, a nonenhanced CT scan and high-spatial-resolution T1-weighted contrast material-enhanced 3D MRI for the evaluation of an acute stroke. | [42]
Deep de-aliasing generative adversarial network-based model | MRI | – | A portion of the random T1-weighted MRI datasets was used for testing and the rest for training the model; these contained valid images. The model was independently tested on different datasets. | []
CNN | Ultrasound | Training-set accuracy was reported separately for nontreated images arranged according to disease and for segmented images. | The dataset consisted of breast nodules and thyroid nodules, comprising both malignant and benign nodules. | []
Deep canonical correlation analysis algorithm | Ultrasound | The designed model's reported metrics include sensitivity, classification accuracy
, specificity, false positive rate, Youden's index, and false negative rate (each as mean ± SD). | The dataset used was obtained from 93 patients. | []

13.4 Designing your own deep learning model based on medical images

Based on our elaborate and comprehensive comparison and analysis of the most effective DL-based tools that can be applied to medical images, we have prepared the following table. The table lists which DL model is most effective for a specific type of medical image. Anybody who wants to build a new model from scratch can refer to the following table and take inspiration from the most effective models for a certain type of medical image.

Medical image type | Techniques | References
Chest X-ray | CheXNet | []
CT | DeCovNet | []
MRI | Deep de-aliasing generative adversarial network-based model | []
Ultrasound | Deep canonical correlation analysis algorithm | []

13.5 Proposed models

We propose two models that can detect COVID-19 infections in CT scans and X-rays of patients. The first model is a combination of existing models, namely VGG16, DenseNet-161, and ResNet-18, optimized for chest X-rays. An incoming chest X-ray is first classified into three categories, pneumonia, TB, and normal, using VGG16 as the classification model, yielding an accuracy of 95.9%. Then DenseNet-161 is used to separate the pneumonia-classified images of the previous stage into two categories, normal pneumonia and COVID-19, with an accuracy of 98.9%. Finally, the images classified as COVID-19 are further subcategorized into three severity levels, mild, medium, and severe, using ResNet-18, which yields an accuracy of 76%. About 2,271 chest X-ray images were sourced from the Clinico Diagnostic Lab based in Mumbai, India. The X-ray images had dimensions of 895 × 1,024 × 3 pixels; they were grayscaled and their dimensions were reduced to 64 × 64 × 1.
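The grayscale-and-resize preprocessing just described can be sketched as follows. Averaging the RGB channels and nearest-neighbor resampling are illustrative assumptions, since the chapter does not specify the exact conversion or interpolation method used.

```python
def to_grayscale(img):
    """Average the three channels of an H x W x 3 image (nested lists)."""
    return [[sum(px) / 3.0 for px in row] for row in img]

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2D image to out_h x out_w."""
    in_h, in_w = len(img), len(img[0])
    return [[img[i * in_h // out_h][j * in_w // out_w] for j in range(out_w)]
            for i in range(out_h)]

# Toy 4 x 4 x 3 image reduced to 2 x 2 x 1, mimicking 895 x 1024 x 3 -> 64 x 64 x 1.
rgb = [[[v, v, v] for v in row] for row in
       [[0, 0, 10, 10], [0, 0, 10, 10], [20, 20, 30, 30], [20, 20, 30, 30]]]
small = resize_nearest(to_grayscale(rgb), 2, 2)  # [[0.0, 10.0], [20.0, 30.0]]
```

In practice a library routine (e.g., an image-processing package's resize with a chosen interpolation mode) would replace these helpers, but the shape transformation is the same.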
For VGG16, the dataset consisted of 1,071 images of all three types (normal, pneumonia, and TB). Out of this dataset, 70% of the images were retained for training the model, and the remaining were reserved for testing. For VGG16, the training dataset consisted of 500 pneumonia, 303 TB, and 388 normal images. The dataset for the DenseNet-161 model consisted of 500 pneumonia images and 500 COVID-19 images, out of which 70% of the images of each category were used in training the model. For ResNet-18, 80 images of each subclass (mild, medium, and severe) were used to train the model.

Figure 13.13: This is the block diagram of our first proposed system. In the first stage, the incoming X-ray scan is categorized into three categories, normal, pneumonia, and tuberculosis, using VGG16. In the second stage, the X-rays labeled as pneumonia in the previous stage are divided into two groups, normal pneumonia and COVID-19, using DenseNet-161. Finally, using ResNet-18, the COVID-19 labeled X-rays are classified according to the severity of infection into three groups: mild, medium, and severe.

The second model we propose detects COVID-19 infections by analyzing the CT scans of various patients. The proposed model is based on CNNs; it is fed RGB input images of dimensions 128 × 128 × 3. An input image is passed through a series of convolutional layers, max-pooling layers, and a flatten layer until it reaches the single final neuron, which declares whether the image is COVID-19 positive or negative. About 349 CT scans of 216 COVID-19 positive patients and 463 CT scans of normal people were used as the dataset for this model. About 80% of the images were used for training, and the remaining images were equally divided into testing and validation sets.
Out of the 73 images in the testing set, 28 were correctly identified as COVID-19 positive, 32 were correctly identified as COVID-19 negative, 6 COVID-19 positive images were misclassified as COVID-19 negative, and 7 COVID-19 negative images were misclassified as COVID-19 positive, giving an overall accuracy of 60/73 ≈ 82%.

Figure 13.14: This is the architecture of the neural network designed in our second proposed model. It inputs a CT scan of dimensions 128 × 128 × 3 pixels. The CT scan is filtered through two convolutional layers of dimensions 126 × 126 × 32 and 124 × 124 × 32. It is followed by passing the CT scan through a max-pooling layer of dimensions 62 × 62 × 32. In the next stage, the image is passed through two convolutional layers having dimensions 60 × 60 × 32 and 58 × 58 × 32. Then it is passed through the final pooling layer of dimensions 29 × 29 × 32. In the last stage, the image is flattened using a flatten layer, passed through a dropout layer, and delivered to the single neuron in the final layer that categorizes the image as COVID-19 positive or COVID-19 negative. The flatten layer and dropout layer have sizes of 26,912 neurons and 256 neurons, respectively.

13.6 Applications of deep learning in medical image diagnosis

AI has resulted in exponential growth in the medical field. ML can analyze organs [35]. However, methods need to be designed to estimate the accuracy and efficacy of this idea. Lewis et al. [36] have mentioned in their work how AI is commencing to augment medical imaging techniques and advancements. It was stated that AI-based solutions would challenge the conventional workflow of medical imaging, from image acquisition to image analysis. ML can identify patterns and detect a particular disease. AI can be developed in fields such as MRI, molecular imaging, and CT. Diagnostic radiographers should have the ability to rectify the errors if produced or detect an
incorrect application of the algorithm. Radiographers who have bioinformatics skills can help in improving clinical decision-making. A review focusing on the early diagnosis of breast cancer was conducted by Bharati et al. [37], in which prominent ANNs that used mammography datasets for the detection of breast cancer were explained in great detail. It also elaborated on the various pros and cons of using deep belief neural networks, multilayer neural networks, and so on. Razzak et al. [38] present a descriptive comparison of different architectures of DL-based tools that have been developed to classify and detect various kinds of disorders by analyzing different medical images. The study discusses the future trends of using DL on medical images and which specific fields of medical diagnosis will be most impacted by it. It also remarks on the effort that must be invested to label and prepare medical images and other medical data before any DL-based tools can be trained on them. It also briefly touches upon the issue of patient data privacy: a large amount of data is required by any DL model to train itself, and many patients would have to agree to release their medical images before a dataset can be compiled. Ghoshal et al. [39] remark that almost all recent DL-based tools created to detect COVID-19 and other ailments focus primarily on improving the classification accuracy for these disorders, but they have not taken into account quantified uncertainty while making predictions. Understanding how confident the model is when making a prediction is crucial to the adoption of DL-based tools in clinics worldwide. Hence, in this work [39], a method to quantify the uncertainty in the predictions of a DL-based model was proposed, utilizing drop-weight-based Bayesian CNNs.
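The idea behind such uncertainty estimates can be sketched as follows: run several stochastic forward passes, here simulated by randomly dropping weights, and report the spread of the predictions. The tiny linear "model" and the drop probability are illustrative assumptions of ours, not the architecture of [39].

```python
import random
import statistics

def stochastic_predict(weights, features, drop_prob, rng):
    """One forward pass of a toy linear scorer with weights randomly
    dropped (set to zero), mimicking drop-weight Bayesian inference."""
    kept = [0.0 if rng.random() < drop_prob else w for w in weights]
    return sum(w * x for w, x in zip(kept, features))

def predict_with_uncertainty(weights, features, n_samples=200,
                             drop_prob=0.2, seed=0):
    """Mean prediction and standard deviation over stochastic passes."""
    rng = random.Random(seed)
    scores = [stochastic_predict(weights, features, drop_prob, rng)
              for _ in range(n_samples)]
    return statistics.mean(scores), statistics.stdev(scores)

mean, std = predict_with_uncertainty([0.5, -0.3, 0.8], [1.0, 2.0, 1.0])
# A large std relative to the mean flags a low-confidence prediction.
```

A clinic could then route high-uncertainty cases to a radiologist instead of trusting the automated label, which is exactly the adoption argument made above.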
13.7 Future scope

It is known that AI is a rapidly growing field that is finding its use in several areas. It has automated several tasks with high accuracy, and we can expect it to do the same for many others. AI has found significant use in the field of medicine. It can be used for a plethora of varied tasks, such as identifying infections in medical images, grading the severity of infection, and detecting abnormalities that indicate the initial stage of severe conditions. As the number of scans increases, so does the work for radiologists. AI has shown tremendous growth in medical imaging in recent years. Alexander et al. [40] show that AI can aid a radiologist as an assistant. AI-assisted medical imaging has recently seen heavy investment from large corporations and well-established companies. A survey was conducted to understand the changes in radiologists' workloads with respect to their subspecialties, years of clinical practice, and other factors. This survey was limited to SERMO-verified radiologists, and it was conducted as a computer-assisted web interview. Despite the growth of AI, many radiologists are skeptical about using it, as they fear its diagnostic abilities might falter in the case of complex patients. A concept of three different horizons of market development for AI was introduced and explained. In horizon 1, companies would offer AI applications, but not at scale. By the second horizon, there would be rapid investment in AI-assisted medical imaging, with some companies emerging as industry leaders in this field. Like any other well-established industry, AI-assisted medical imaging would stabilize by the third horizon, witnessing a constant stream of advancements every year. Dilsizian et al. [41] describe applications of AI in medicine such as efficient search of medical databases and finding similar patients with the help of their medical history.
It is mentioned that, with the advent of DL-based solutions in cardiac imaging, we will in the future be able to perform diagnoses for multiple candidates in a short time. The next section explains the current applications of AI in medical imaging, such as cardiac imaging. Although AI has progressed a lot in medical imaging, failures would be a barrier for many medical systems and hospitals trying to integrate AI into health applications.

13.8 Summary

In this chapter, we have understood the importance and advancement of AI in dealing with pandemics effectively. We have gone through the most commonly used medical imaging techniques, such as MRI scans and CT scans. We covered the history of neural networks for medical image processing. This was followed by a laconic overview of the varied techniques used in the literature for diagnosis from each of these medical images. We compared the different techniques used and discussed the pros and cons of each method. We then proposed two models of our own to detect COVID-19 infections: one based on a combination of existing models and another based on CNNs. We finally discussed the further use of DL in the field of medical image analysis and concluded with a brief discussion predicting future trends in this field.

References

[1] Subrato, B., Podder, P., Mondal, M.R.H. Hybrid deep learning for detecting lung diseases from X-ray images, Informatics in Medicine Unlocked, 2020, 20, 100391. https://doi.org/10.1016/j.imu.2020.100391
[2] Filippo, P., Codari, M., Sardanelli, F. Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine, European Radiology Experimental, 2018, 2(1), 35. Available at: https://link.springer.com/article/10.1186/s41747-018-0061-6
[3] Mondal, M.R.H., Bharati, S., Podder, P., Podder, P. Data analytics for novel coronavirus disease, Informatics in Medicine Unlocked, 2020, 20, 100374.
https://doi.org/10.1016/j.imu.2020.100374
[4] Khanam, F., Nowrin, I., Mondal, M.R.H. Data visualization and analyzation of COVID-19, JSRR, Apr 2020, 26(3), 42–52.
[5] Prajoy, P., Bharati, S., Mondal, M.R.H., Kose, U. Application of machine learning for the diagnosis of COVID-19. In: Kose, U., Gupta, D., De Albuquerque, V.H.C., Khanna, A., Eds, Data Science for COVID-19. Elsevier, in press.
[6] Hariharan, R., Sudhakar, P., Venkataramani, R., Thiruvenkadam, S., Annangi, P., Babu, N., Vaidya, V. Understanding the mechanisms of deep transfer learning for medical images. Springer, 2016, 188–196. Available at: https://link.springer.com/chapter/10.1007/978-3-319-46976-8_20
[7] Bhandary, A., Prabhu, G.A., Rajinikanth, V., Thanaraj, K.P., Satapathy, S.C., Robbins, D.E., Shasky, C., Zhang, Y.-D., Tavares, J.M.R.S., Raj, N.S.M. Deep-learning framework to detect lung abnormality – A study with chest X-ray and lung CT scan images, Pattern Recognition Letters, 2020, 129, 271–278. Available at: https://www.sciencedirect.com/science/article/pii/S0167865519303277
[8] Shi, F., Wang, J., Shi, J., Wu, Z., Wang, Q., Tang, Z., He, K., Shi, Y., Shen, D. Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19, IEEE Reviews in Biomedical Engineering, 2020. Available at: https://ieeexplore.ieee.org/abstract/document/9069255
[9] Emad, O., Yassine, I.A., Fahmy, A.S. Automatic localization of the left ventricle in cardiac MRI images using deep learning, IEEE, 2015, 683–686. Available at: https://ieeexplore.ieee.org/abstract/document/7318454
[10] Skourt, B.A., Hassani, A.E., Majda, A. Lung CT image segmentation using deep neural networks, Procedia Computer Science, 2018, 127, 109–113.
Available at: https://www.sciencedirect.com/science/article/pii/S1877050918301157
[11] Christian, S., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A. Going deeper with convolutions, 2015, 1–9. Available at: https://www.cv-foundation.org/openaccess/content_cvpr_2015/html/Szegedy_Going_Deeper_With_2015_CVPR_paper.html
[12] Karen, S., Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. Available at: https://arxiv.org/abs/1409.1556
[13] Kaiming, H., Zhang, X., Ren, S., Sun, J. Deep residual learning for image recognition, 2016, 770–778. Available at: http://openaccess.thecvf.com/content_cvpr_2016/html/He_Deep_Residual_Learning_CVPR_2016_paper.html
[14] Forrest, I., Moskewicz, M., Karayev, S., Girshick, R., Darrell, T., Keutzer, K. DenseNet: Implementing efficient convnet descriptor pyramids. arXiv preprint arXiv:1404.1869, 2014. Available at: https://arxiv.org/abs/1404.1869
[15] Pranav, R., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., Ding, D., Bagul, A., Langlotz, C., Shpanskaya, K., Lungren, M.P., Ng, A.Y. CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225, 2017. Available at: https://arxiv.org/abs/1711.05225
[16] Panwar, H., Gupta, P.K., Siddiqui, M.K., Morales-Menendez, R., Singh, V. Application of deep learning for fast detection of COVID-19 in X-rays using nCOVnet, Chaos, Solitons, and Fractals, 2020, 109944. Available at: https://www.sciencedirect.com/science/article/pii/S096007792030343X
[17] Jaiswal, A.K., Tiwari, P., Kumar, S., Gupta, D., Khanna, A., Rodrigues, J.J.P.C. Identifying pneumonia in chest X-rays: a deep learning approach, Measurement, 2019, 145, 511–518.
Available at: https://www.sciencedirect.com/science/article/pii/S0263224119305202
[18] Vikash, C., Singh, S.K., Khamparia, A., Gupta, D., Tiwari, P., Moreira, C., Damaševičius, R., De Albuquerque, V.H.C. A novel transfer learning based approach for pneumonia detection in chest X-ray images, Applied Science, 2020, 10(2). https://doi.org/10.3390/app10020559
[19] Heo, S.-J., Kim, Y., Yun, S., Lim, S.-S., Kim, J., Nam, C.-M., Park, E.-C., Jung, I., Yoon, J.-H. Deep learning algorithms with demographic information help to detect tuberculosis in chest radiographs in annual workers' health examination data, International Journal of Environmental Research and Public Health, 2019, 16(2), 250. Available at: https://www.mdpi.com/1660-4601/16/2/250
[20] Loey, M., Smarandache, F., Khalifa, M., Eldeen, N. Within the lack of chest COVID-19 X-ray dataset: a novel detection model based on GAN and deep transfer learning, Symmetry, 2020, 12(4), 651. Available at: https://www.mdpi.com/2073-8994/12/4/651
[21] Zheng, C., Deng, X., Fu, Q., Zhou, Q., Feng, J., Ma, H., Liu, W., Wang, X. Deep learning-based detection for COVID-19 from chest CT using weak label. medRxiv, 2020. Available at: https://www.medrxiv.org/content/medrxiv/early/2020/03/26/2020.03.12.20027185.full.pdf
[22] Zhenyu, T., Zhao, W., Xie, X., Zhong, Z., Shi, F., Liu, J., Shen, D. Severity assessment of coronavirus disease 2019 (COVID-19) using quantitative features from chest CT images. arXiv preprint arXiv:2003.11988, 2020. Available at: https://arxiv.org/abs/2003.11988
[23] Zhang, H.T., Zhang, J.S., Zhang, H.H., Nan, Y.D., Zhao, Y., Fu, E.Q., Xie, Y.H., Liu, W., Li, W.P., Zhang, H.J., Jiang, H. Automated detection and quantification of COVID-19 pneumonia: CT imaging analysis by a deep learning-based software, European Journal of Nuclear Medicine and Molecular Imaging, 2020, 1–8.
Available at: https://link.springer.com/article/10.1007/s00259-020-04953-1
[24] Lin, L., Qin, L., Xu, Z., Yin, Y., Wang, X., Kong, B., Bai, J., Lu, Y., Fang, Z., Song, Q., Cao, K., Liu, D., Wang, G., Xu, Q., Fang, X., Zhang, S., Xia, J., Xia, J. Using artificial intelligence to detect COVID-19 and community-acquired pneumonia based on pulmonary CT: evaluation of the diagnostic accuracy, Radiology, 2020, 296(2). Available at: https://covid19.elsevierpure.com/en/publications/using-artificial-intelligence-to-detect-covid-19-and-community-ac
[25] Mei, X., Lee, H., Diao, K., et al. Artificial intelligence–enabled rapid diagnosis of patients with COVID-19, Nature Medicine, 2020, 26, 1224–1228. Available at: https://doi.org/10.1038/s41591-020-0931-3
[26] Wu, X., Hui, H., Niu, M., Li, L., Wang, L., He, B., Yang, X., Li, L., Li, H., Tian, J., Zha, Y. Deep learning-based multi-view fusion model for screening 2019 novel coronavirus pneumonia: a multicentre study, European Journal of Radiology, 2020, 109041. Available at: https://www.sciencedirect.com/science/article/pii/S0720048X20302308
[27] Hu, S., et al. Weakly supervised deep learning for COVID-19 infection detection and classification from CT images, IEEE Access, 2020, 8, 118869–118883. doi: 10.1109/ACCESS.2020.3005510. Available at: https://ieeexplore.ieee.org/abstract/document/9127422
[28] Lee, J.-G., Jun, S., Cho, Y.-W., Lee, H., Kim, G.B., Seo, J.B., Kim, N. Deep learning in medical imaging: general overview, Korean Journal of Radiology, 2017, 18(4), 570–584. Available at: https://synapse.koreamed.org/DOIx.php?id=10.3348/kjr.2017.18.4.570
[29] Işın, A., Direkoğlu, C., Şah, M. Review of MRI-based brain tumor image segmentation using deep learning methods, Procedia Computer Science, 2016, 102, 317–324.
Available at: https://www.sciencedirect.com/science/article/pii/S187705091632587X
[30] Yang, G., Yu, S., Dong, H., Slabaugh, G., Dragotti, P.L., Ye, X., Liu, F., Arridge, S., Keegan, J., Guo, Y., Firmin, D. DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction, IEEE Transactions on Medical Imaging, 2017, 37(6), 1310–1321. Available at: https://ieeexplore.ieee.org/abstract/document/8233175
[31] Liang, X., Yu, J., Liao, J., Chen, Z. Convolutional neural network for breast and thyroid nodules diagnosis in ultrasound imaging, BioMed Research International, 2020. Available at: https://www.hindawi.com/journals/bmri/2020/1763803/abs/
[32] Guo, L.H., Wang, D., Qian, Y.Y., Zheng, X., Zhao, C.K., Li, X.L., Bo, X.W., Yue, W.W., Zhang, Q., Shi, J., Xu, H.X. A two-stage multi-view learning framework based computer-aided diagnosis of liver tumors with contrast enhanced ultrasound images, Clinical Hemorheology and Microcirculation, 2018, 69(3), 343–354. Available at: https://content.iospress.com/articles/clinical-hemorheology-and-microcirculation/ch170275
[33] Chi, J., Walia, E., Babyn, P., Wang, J., Groot, G., Eramian, M. Thyroid nodule classification in ultrasound images by fine-tuning deep convolutional neural network, Journal of Digital Imaging, 2017, 30(4), 477–486. Available at: https://link.springer.com/article/10.1007/s10278-017-9997-y
[34] Horry, M.J., et al. COVID-19 detection through transfer learning using multimodal imaging data, IEEE Access, 2020, 8, 149808–149824. doi: 10.1109/ACCESS.2020.3016780
[35] Kortesniemi, M., Tsapaki, V., Trianni, A., Russo, P., Maas, A., Källman, H.E., Brambilla, M., Damilakis, J. The European Federation of Organisations for Medical Physics (EFOMP) White Paper: Big data and deep learning in medical imaging and in relation to medical physics profession, Elsevier, 2018.
Available at: https://www.sciencedirect.com/science/article/abs/ pii/S1120179718313152. [36] Lewis, S.J., Gandomkar, Z., Brennan, P.C. Artificial Intelligence in medical imaging practice: looking to the future, Journal of Medical Radiation Sciences, 2019, 66(4), 292–295. Available at: https://onlinelibrary.wiley.com/doi/full/10.1002/jmrs.369 [37] Bharati, S., Podder, P., Mondal, M.R.H. Artificial neural network based breast cancer screening: a comprehensive review, International Journal of Computer Information Systems and Industrial Management Applications, MIR Labs, USA, 2020, 12, 125–137, May 2020. [38] Razzak, M.I., Naz, S., Zaib, A. Deep learning for medical image processing: Overview, challenges and the future. In: Classification in BioApps. Cham, Springer, 2018, 323–350, 2018. Available at: https://link.springer.com/chapter/10.1007/978-3-319-65981-7_12 [39] Ghoshal, B., Tucker, A. “Estimating uncertainty and interpretability in deep learning for coronavirus (COVID-19) detection.” arXiv preprint arXiv:2003.10769 (2020). Available at: https://arxiv.org/abs/2003.10769 [40] Alexander, A., Jiang, A., Ferreira, C., Zurkiya, D. An intelligent future for medical imaging: a market outlook on artificial intelligence for medical imaging, Journal of the American College of Radiology, 2020, 17(1), 165–170. Available at: https://www.sciencedirect.com/science/ article/pii/S1546144019308634 [41] Dilsizian, S.E., Siegel, E.L. Artificial intelligence in medicine and cardiac imaging: harnessing big data and advanced computing to provide personalized medical diagnosis and treatment, Current Cardiology Reports, 2014, 16(1), 441. Available at: https://link.springer.com/article/ 10.1007%2Fs11886-013-0441-8 [42] Liu, F., Jang, H., Kijowski, R., Bradshaw, T., McMillan, A.B. Deep learning MR imaging-based attenuation correction for PET/MR imaging, Radiology, 2018, 286(2), 676–684. 
Available at: https://pubs.rsna.org/doi/abs/10.1148/radiol.2017170700 13 Deep learning on medical images to combat a pandemic 257 [43] Subrato, B., Podder, P., Mondal, M.R.H., Podder, P., Kose, U. A review on epidemiology, genomic characteristics, spread and treatments of COVID-19. In: Kose, U., Gupta, D., De AU: Reference Albuquerque, V.H.C., Khanna, A., Eds, Data Science for COVID-19. to be published by Elsevier, “[43]” is listed in the references list In Press. but is not cited in [44] Liu, J., Pan, Y., Li, M., Chen, Z., Tang, L., Lu, C., Wang, J. Applications of deep learning to MRI the text. Please ei- images: a survey, Big Data Mining and Analytics, 2018, 1(1), 1–18. Available at: https:// ther cite the refer- ieeexplore.ieee.org/abstract/document/8268732 ence or remove it from the references list.