Antonio Brunetti, D. Buongiorno, Gianpaolo Trotta, Vitoantonio Bevilacqua (2018). Computer vision and deep learning techniques for pedestrian detection and tracking: A survey. Neurocomputing, 300.
J. Dolz, Christian Desrosiers, Ismail Ayed (2016). 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. NeuroImage, 170.
Alex Torres, Hao Yan, Armin Aboutalebi, Arun Das, Lide Duan, P. Rad (2018). Patient facial emotion recognition and sentiment analysis using secure cloud with hardware acceleration. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications, Elsevier, pp. 61–89.
Gaoxiang Chen, Qun Li, Fuqian Shi, I. Rekik, Zhifang Pan (2020). RFDCR: Automated brain lesion segmentation using cascaded random forests with dense conditional random fields. NeuroImage, 211.
N. Ramli, M. Hussain, B. Jan, Bawadi Abdullah (2017). Online composition prediction of a debutanizer column using artificial neural network. Iranian Journal of Chemistry & Chemical Engineering, 36. doi: 10.30492/IJCCE.2017.26704.
Ganjkhanlou Yadolah, Bayandori Abdolmajid, Hosseini Samanesadat, Nazari Tayebe, Gazmeh Akbar, Badraghi Jalil (2014). Application of Image Analysis in the Characterization of Electrospun Nanofibers. Iranian Journal of Chemistry & Chemical Engineering, 33. doi: 10.30492/IJCCE.2014.10750.
R. Ranjbarzadeh, S. Saadi (2020). Automated liver and tumor segmentation based on concave and convex points using fuzzy c-means and mean shift clustering. Measurement, 150.
Dingwen Zhang, Guohai Huang, Qiang Zhang, Jungong Han, Junwei Han, Yizhou Yu (2021). Cross-modality deep feature learning for brain tumor segmentation. Pattern Recognition, 110.
Lei Geng, Siqi Zhang, Jun Tong, Zhitao Xiao (2019). Lung segmentation method with dilated convolution based on VGG-16 network. Computer Assisted Surgery, 24.
Bjoern Menze, A. Jakab, S. Bauer, Jayashree Kalpathy-Cramer, K. Farahani, J. Kirby, Y. Burren, N. Porz, J. Slotboom, R. Wiest, L. Lanczi, E. Gerstner, M. Weber, T. Arbel, B. Avants, N. Ayache, Patricia Buendia, D. Collins, Nicolas Cordier, Jason Corso, A. Criminisi, T. Das, H. Delingette, Çağatay Demiralp, C. Durst, M. Dojat, Senan Doyle, Joana Festa, F. Forbes, Ezequiel Geremia, B. Glocker, P. Golland, Xiaotao Guo, A. Hamamci, K. Iftekharuddin, R. Jena, N. John, E. Konukoglu, D. Lashkari, J. Mariz, Raphael Meier, Sérgio Pereira, Doina Precup, S. Price, Tammy Riklin-Raviv, Syed Reza, Michael Ryan, Duygu Sarikaya, L. Schwartz, Hoo-Chang Shin, J. Shotton, Carlos Silva, N. Sousa, N. Subbanna, G. Székely, Thomas Taylor, O. Thomas, N. Tustison, Gözde Ünal, F. Vasseur, M. Wintermark, Dong Ye, Liang Zhao, Binsheng Zhao, D. Zikic, M. Prastawa, M. Reyes, K. Leemput (2015). The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Transactions on Medical Imaging, 34.
Xiaomei Zhao, Yihong Wu, Guidong Song, Zhenye Li, Yazhuo Zhang, Yong Fan (2017). A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Medical Image Analysis, 43.
Chenhong Zhou, Changxing Ding, Xinchao Wang, Zhentai Lu, D. Tao (2019). One-Pass Multi-Task Networks With Cross-Task Guided Attention for Brain Tumor Segmentation. IEEE Transactions on Image Processing, 29.
Torabi Hesam, Masoudi-Nejad Ali, Zarei Fatemeh (2012). Finding Exact and Solo LTR-Retrotransposons in Biological Sequences Using SVM. Iranian Journal of Chemistry & Chemical Engineering, 31. doi: 10.30492/IJCCE.2012.5998.
Hongdou Yao, Xuejie Zhang, Xiaobing Zhou, Shengyan Liu (2019). Parallel Structure Deep Neural Network Using CNN and RNN with an Attention Mechanism for Breast Cancer Histology Image Classification. Cancers, 11.
Bo Wang, Y. Lei, S. Tian, Tonghe Wang, Yingzi Liu, P. Patel, A. Jani, H. Mao, W. Curran, Tian Liu, Xiaofeng Yang (2019). Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Medical Physics, 46.
Zhenyu Tang, Sahar Ahmad, P. Yap, D. Shen (2018). Multi-Atlas Segmentation of MR Tumor Brain Images Using Low-Rank Based Image Recovery. IEEE Transactions on Medical Imaging, 37.
A. Khosravanian, M. Rahmanimanesh, P. Keshavarzi, S. Mozaffari (2020). Fast level set method for glioma brain tumor segmentation based on superpixel fuzzy clustering and lattice Boltzmann method. Computer Methods and Programs in Biomedicine, 198.
Kai Hu, Qinghai Gan, Yuan Zhang, Shuhua Deng, F. Xiao, Wei Huang, Chunhong Cao, Xieping Gao (2019). Brain Tumor Segmentation Using Multi-Cascaded Convolutional Neural Networks and Conditional Random Field. IEEE Access, 7.
Diego Marcheggiani, Oscar Täckström, Andrea Esuli, F. Sebastiani (2014). Hierarchical Multi-label Conditional Random Fields for Aspect-Oriented Opinion Mining. Lecture Notes in Computer Science, vol. 8416, pp. 273–285. doi: 10.1007/978-3-319-06028-6_23.
Md. Islam, Md. Islam, A. Asraf (2020). A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Informatics in Medicine Unlocked, 20.
Muhammad Ali, B. Raza, A. Shahid, Fahad Mahmood, Muhammad Yousuf, A. Dar, Uzair Iqbal (2020). Enhancing breast pectoral muscle segmentation performance by using skip connections in fully convolutional network. International Journal of Imaging Systems and Technology, 30.
Junping Zhong, Zhigang Liu, Zhiwei Han, Ye Han, Wenxuan Zhang (2019). A CNN-Based Defect Inspection Method for Catenary Split Pins in High-Speed Railway. IEEE Transactions on Instrumentation and Measurement, 68.
Erwin Meir, C. Hadjipanayis, A. Norden, H. Shu, P. Wen, J. Olson (2010). Exciting New Advances in Neuro-Oncology: The Avenue to a Cure for Malignant Glioma. CA: A Cancer Journal for Clinicians, 60.
A. Mahmood, Bennamoun, S. An, Ferdous Sohel, F. Boussaid, R. Hovey, G. Kendrick, Robert Fisher (2017). Deep Learning for Coral Classification. In Handbook of Neural Computation, Elsevier, pp. 383–401.
Azari Ahmad, Shariaty Mojtaba, A. Mahmoud (2012). Short-term and Medium-term Gas Demand Load Forecasting by Neural Networks. Iranian Journal of Chemistry & Chemical Engineering, 31. doi: 10.30492/IJCCE.2012.5923.
S. Iqbal, M. Khan, T. Saba, Zahid Mehmood, Nadeem Javaid, A. Rehman, Rashid Abbasi (2019). Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microscopy Research and Technique, 82.
Mohammad Havaei, Axel Davy, David Warde-Farley, A. Biard, Aaron Courville, Yoshua Bengio, C. Pal, Pierre-Marc Jodoin, H. Larochelle (2015). Brain tumor segmentation with Deep Neural Networks. Medical Image Analysis, 35.
Ahmad Salehi, Preety Baglat, Gaurav Gupta (2020). Review on machine and deep learning models for the detection and prediction of Coronavirus. Materials Today: Proceedings, 33.
Y. Pourasad, R. Ranjbarzadeh, A. Mardani (2021). A New Algorithm for Digital Image Encryption Based on Chaos Theory. Entropy, 23.
Yanhui Tu, Jun Du, Lei Sun, Feng Ma, Haikun Wang, Jingdong Chen, Chin-Hui Lee (2019). An iterative mask estimation approach to deep learning based multi-channel speech recognition. Speech Communication, 106.
Ali Jalalifar, H. Soliman, M. Ruschin, A. Sahgal, A. Sadeghi-Naini (2020). A Brain Tumor Segmentation Framework Based on Outlier Detection Using One-Class Support Vector Machine. In 2020 42nd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 1067–1070. doi: 10.1109/EMBC44109.2020.9176263.
P. Coupé, Boris Mansencal, Michaël Clément, Rémi Giraud, B. Senneville, T. Thong, V. Lepetit, J. Manjón (2019). AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. NeuroImage, 219.
M. Antonelli, M. Cardoso, E. Johnston, M. Appayya, B. Presles, M. Modat, S. Punwani, S. Ourselin (2019). GAS: A genetic atlas selection strategy in multi-atlas segmentation framework. Medical Image Analysis, 52.
Wenpeng Yin, Hinrich Schütze, Bing Xiang, Bowen Zhou (2015). ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs. Transactions of the Association for Computational Linguistics, 4.
Seyyed Partovi, S. Sadeghnejad (2018). Reservoir Rock Characterization Using Wavelet Transform and Fractal Dimension. Iranian Journal of Chemistry & Chemical Engineering, 37. doi: 10.30492/IJCCE.2018.27647.
R. Ranjbarzadeh, Saeid Ghoushchi, Malika Bendechache, A. Amirabadi, M. Rahman, Soroush Saadi, A. Aghamohammadi, Mersedeh Forooshani (2021). Lung Infection Segmentation for COVID-19 Pneumonia Based on a Cascade Convolutional Network from CT Images. BioMed Research International, 2021.
Guofa Li, Yifan Yang, Xingda Qu (2020). Deep Learning Approaches on Pedestrian Detection in Hazy Weather. IEEE Transactions on Industrial Electronics, 67.
Junwen Chen, Zhigang Liu, Hongrui Wang, A. Núñez, Zhiwei Han (2018). Automatic Defect Detection of Fasteners on the Catenary Support Device Using Deep Convolutional Neural Network. IEEE Transactions on Instrumentation and Measurement, 67.
S. Bakas et al. (2017). Segmentation labels and radiomic features for the pre-operative scans of the TCGA-GBM collection. The Cancer Imaging Archive. doi: 10.7937/K9/TCIA.2017.KLXWJJ1Q.
Xiaoqiang Chen, Ling Zheng, Chong Zhao, Qicong Wang, Maozhen Li (2020). RRGCCAN: Re-Ranking via Graph Convolution Channel Attention Network for Person Re-Identification. IEEE Access, 8.
N. Karimi, Ramin Kondrood, Tohid Alizadeh (2017). An intelligent system for quality measurement of Golden Bleached raisins using two comparative machine learning algorithms. Measurement, 107.
Farzad Husain, B. Dellen, C. Torras (2017). Scene Understanding Using Deep Learning. In Handbook of Neural Computation, Elsevier, pp. 373–382.
Baiying Lei, Shan Huang, Hang Li, Ran Li, Cheng Bian, Y. Chou, Jin Qin, Peng Zhou, Xuehao Gong, Jie-Zhi Cheng (2020). Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Medical Image Analysis, 64.
S. Bakas et al. (2017). Segmentation labels and radiomic features for the pre-operative scans of the TCGA-LGG collection. The Cancer Imaging Archive. doi: 10.7937/K9/TCIA.2017.GJQ7R0EF.
Nitish Srivastava, Geoffrey Hinton, A. Krizhevsky, Ilya Sutskever, R. Salakhutdinov (2014). Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15.
Bei Fang, Ying Li, Haokui Zhang, J. Chan (2019). Hyperspectral Images Classification Based on Dense Convolutional Networks with Spectral-Wise Attention Mechanism. Remote Sensing, 11.
E. Kamari, A. Hajizadeh, M. Kamali (2020). Experimental Investigation and Estimation of Light Hydrocarbons Gas-Liquid Equilibrium Ratio in Gas Condensate Reservoirs through Artificial Neural Networks. Iranian Journal of Chemistry & Chemical Engineering, 39.
Lei Geng, Jia Wang, Zhitao Xiao, Jun Tong, Fang Zhang, Jun Wu (2019). Encoder-decoder with dense dilated spatial pyramid pooling for prostate MR images segmentation. Computer Assisted Surgery, 24.
Quoc Le, Tomas Mikolov (2014). Distributed Representations of Sentences and Documents. doi: 10.1145/2740908.2742760.
Muhammad Siddiqi, Rahman Ali, A. Khan, Young-Tack Park, Sungyoung Lee (2015). Human Facial Expression Recognition Using Stepwise Linear Discriminant Analysis and Hidden Conditional Random Fields. IEEE Transactions on Image Processing, 24.
Yoshua Bengio (2012). Practical Recommendations for Gradient-Based Training of Deep Architectures.
Vijay Badrinarayanan, Alex Kendall, R. Cipolla (2015). SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39.
N. Wahab, Asifullah Khan, Yeon Lee (2017). Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Computers in Biology and Medicine, 85.
V. Mitra, H. Franco, R. Stern, Julien Hout, L. Ferrer, M. Graciarena, Wen Wang, D. Vergyri, A. Alwan, J. Hansen (2017). Robust Features in Deep-Learning-Based Speech Recognition. In New Era for Robust Speech Recognition, Springer, pp. 187–217.
S. Bakas, H. Akbari, Aristeidis Sotiras, M. Bilello, Martin Rozycki, J. Kirby, J. Freymann, K. Farahani, C. Davatzikos (2017). Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Scientific Data, 4.
Chunwei Tian, Yong Xu, Zuoyong Li, W. Zuo, Lunke Fei, Hong Liu (2020). Attention-guided CNN for image denoising. Neural Networks, 124.
Zongwei Zhou, M. Siddiquee, Nima Tajbakhsh, Jianming Liang (2019). UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Transactions on Medical Imaging, 39.
B. Kavitha, D. Thambavani (2020). Artificial Neural Network Optimization of Adsorption Parameters for Cr(VI), Ni(II) and Cu(II) Ions Removal from Aqueous Solutions by Riverbed Sand. Iranian Journal of Chemistry & Chemical Engineering, 39.
Tongxue Zhou, S. Ruan, Yu Guo, S. Canu (2020). A Multi-Modality Fusion Network Based on Attention Mechanism for Brain Tumor Segmentation. In 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI), pp. 377–380. doi: 10.1109/ISBI45749.2020.9098392.
Dingwen Zhang, Guohai Huang, Qiang Zhang, Jungong Han, Junwei Han, Yizhou Wang, Yizhou Yu (2020). Exploring Task Structure for Brain Tumor Segmentation From Multi-Modality MR Images. IEEE Transactions on Image Processing, 29.
Brain tumor segmentation based on deep learning and an attention mechanism using MRI multi-modalities brain images

Ramin Ranjbarzadeh (1), Abbas Bagherian Kasgari (2), Saeid Jafarzadeh Ghoushchi (3), Shokofeh Anari (4), Maryam Naseri (5*) & Malika Bendechache (6)

(1) Department of Telecommunications Engineering, Faculty of Engineering, University of Guilan, Rasht, Iran. (2) Faculty of Management and Accounting, Allameh Tabataba'i University, Tehran, Iran. (3) Faculty of Industrial Engineering, Urmia University of Technology, Urmia, Iran. (4) Department of Accounting, Economic and Financial Sciences, South Tehran Branch, Islamic Azad University, Tehran, Iran. (5) Department of Chemical Engineering, Faculty of Engineering, Golestan University, Aliabad Katoul, Iran. (6) School of Computing, Faculty of Engineering and Computing, Dublin City University, Dublin, Ireland. *email: [email protected]

Abstract. Brain tumor localization and segmentation from magnetic resonance imaging (MRI) are hard and important tasks for several applications in the field of medical analysis. As each brain imaging modality gives unique and key details related to each part of the tumor, many recent approaches use four modalities: T1, T1c, T2, and FLAIR. Although many of them obtain promising segmentation results on the BRATS 2018 dataset, they suffer from complex structures that need more time to train and test. So, in this paper, to obtain a flexible and effective brain tumor segmentation system, we first propose a preprocessing approach that works on only a small part of the image rather than the whole image. This method decreases computing time and overcomes the overfitting problems of a cascade deep learning model. In the second step, as we deal with a smaller part of the brain image in each slice, a simple and efficient Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is proposed. This C-CNN model mines both local and global features in two different routes. Also, to improve the brain tumor segmentation accuracy compared with state-of-the-art models, a novel Distance-Wise Attention (DWA) mechanism is introduced. The DWA mechanism considers the effect of the center location of the tumor and the brain inside the model. Comprehensive experiments conducted on the BRATS 2018 dataset show that the proposed model obtains competitive results: mean whole tumor, enhancing tumor, and tumor core Dice scores of 0.9203, 0.9113, and 0.8726, respectively. Other quantitative and qualitative assessments are presented and discussed.

Brain tumors are among the most threatening types of tumors around the world. Glioma, the most common primary brain tumor, occurs due to the carcinogenesis of glial cells in the spinal cord and brain. Glioma is characterized by several histological and malignancy grades, and an average survival time of fewer than 14 months after diagnosis for glioblastoma patients. Magnetic Resonance Imaging (MRI), a popular non-invasive strategy, produces a large and diverse number of tissue contrasts in each imaging modality and has been widely used by medical specialists to diagnose brain tumors. However, the manual segmentation and analysis of structural MRI images of brain tumors is an arduous and time-consuming task which, thus far, can only be accomplished by professional neuroradiologists [3,4]. Therefore, an automatic and robust brain tumor segmentation method will have a significant impact on brain tumor diagnosis and treatment. Furthermore, it can also lead to the timely diagnosis and treatment of neurological disorders such as Alzheimer's disease (AD), schizophrenia, and dementia. An automatic technique for lesion segmentation can support radiologists by delivering key information about the volume, localization, and shape of tumors (including enhancing tumor core regions and whole tumor regions) to make therapy more effective and meaningful.
There are several differences between a tumor and its normal adjacent tissue (NAT) which hinder the effectiveness of segmentation in medical imaging analysis, e.g., size, bias field (an undesirable artifact caused by improper image acquisition), location, and shape. Several models that try to find accurate and efficient boundary curves of brain tumors in medical images have been implemented in the literature. These models can be divided into three main categories:

(1) Machine learning approaches address these problems mainly by using hand-crafted (or pre-defined) features [6-9]. As an initial step in this kind of segmentation, the key information is extracted from the input image using some feature extraction algorithm, and then a discriminative model is trained to recognize the tumor from normal tissues. The designed machine learning techniques generally employ hand-crafted features with various classifiers, such as random forest [10], support vector machine (SVM) [11,12], and fuzzy clustering [3]. The designed methods and feature extraction algorithms have to extract features, edge-related details, and other necessary information, which is time-consuming. Moreover, when boundaries between healthy tissues and tumors are fuzzy or vague, these methods demonstrate poorer performance.

(2) Multi-atlas segmentation (MAS) algorithms are based on the registration and label fusion of multiple normal brain atlases to a new image modality. Due to the difficulties in registering normal brain atlases and the need for a large number of atlases, these MAS algorithms have not dealt successfully with applications that require speed.

(3) Deep learning methods extract crucial features automatically. These approaches have yielded outstanding results in various application domains, e.g., pedestrian detection [15,16], speech recognition and understanding [17,18], and brain tumor segmentation [19,20]. Zhang et al. proposed a TSBTS network (task-structured brain tumor segmentation network) to mimic physicians' expertise by exploring both the task-modality structure and the task-task structure. The task-modality structure identifies the dissimilar tumor regions by weighing the dissimilar modality volume data, since they reflect diverse pathological features, whereas the task-task structure represents the most distinct area with one part of the tumor and uses it to find another part in its vicinity. A learning method for representing useful features from the knowledge transition across different modality data was employed in [22]. To facilitate the knowledge transition, they used a generative adversarial network (GAN) learning scheme to mine intrinsic patterns from each modality's data. Zhou et al.
introduced a One-pass Multi-Task Network (OM-Net) to overcome the problem of imbalanced data in medical brain volumes. OM-Net uses shared and task-specific parameters to learn discriminative and joint features, and is optimized using both learning-based training and online training-data transfer approaches. Furthermore, a cross-task guided attention (CGA) module is used to share prediction results between tasks. The simultaneous extraction of both local and global contextual features inside a deep CNN structure was proposed by Havaei et al.; their model uses a simple but efficient feature extraction method. An AssemblyNet model was proposed by Coupé et al., which uses the parliamentary decision-making concept for 3D whole-brain MRI segmentation. This parliamentary network is able to solve unseen problems, take complex decisions, and reach a relevant consensus. AssemblyNet employs majority voting by sharing knowledge among neighboring U-Nets, and is thereby able to overcome the problem of limited training data.

Owing to the small size of tumors compared to the rest of the brain, brain imaging data are imbalanced. Because of this, existing networks become biased towards the overrepresented class, and training a deep model often leads to low true positive rates. Additionally, existing deep learning approaches have complex structures, which makes them more time-consuming. To overcome the mentioned difficulties, in our work, a powerful pre-processing strategy that removes a huge amount of unimportant information is used, which yields promising results even with present deep learning models. Owing to this strategy, we do not need a complex deep learning model to locate the tumor and extract features, which would lead to a time-consuming process with a high fault rate. Furthermore, thanks to the reduction in the size of the region of interest, the preprocessing step in this strategy also decreases overfitting problems. Besides, after the pre-processing step, a cascade CNN approach is employed to extract both local and global features in an effective way. In order to make our model robust to variation in the size and location of the tumor, a new distance-wise attention mechanism is applied inside the CNN model.

This study is structured as follows. In Sect. 2.1, the pre-processing procedure, including Z-Score normalization, is described in detail for the four MRI modalities. In Sect. 2.2, the deep learning architecture is described. In Sect. 2.3.1, the distance-wise attention module is demonstrated. In Sect. 2.3.2, the architecture of the proposed Cascade Convolutional Neural Network (C-ConvNet/C-CNN) is explained. The experiments, discussion, and concluding remarks are in Sects. 3 and 4.

Material and methods

In this section, we discuss the proposed method in detail.

Pre-processing. Unlike many other recent deep learning approaches which use the whole image, we focus only on a limited area of it to extract key features. By removing these unnecessary, uninformative parts, the true negative results are dramatically decreased. Also, by applying such a strategy, we do not need to use a very deep convolutional model.

Similar distributions. To improve the final segmentation accuracy, we use four brain modalities, namely T1, FLAIR, T1C, and T2 [26,27]. To make the MRI data more uniform and remove anisotropy effects (especially for the FLAIR modality), we conduct Z-Score normalization on the used modalities. By applying this approach to a medical brain image, the output image has zero mean and unit variance. We implemented this step by subtracting the mean and dividing by the standard deviation in only the brain region (not the background). This step was performed independently for each brain volume of every patient.
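As a concrete illustration of this step, the following NumPy sketch normalizes one volume using only the brain voxels. The function name and the convention that background voxels are exactly zero are our assumptions, not details given in the paper.

```python
import numpy as np

def zscore_brain(volume: np.ndarray) -> np.ndarray:
    """Z-Score normalize one MRI volume using brain voxels only.

    Assumes the background is exactly zero, so the brain mask is
    simply `volume != 0`; the background stays zero after scaling.
    """
    mask = volume != 0                       # brain region, not background
    mean = volume[mask].mean()
    std = volume[mask].std()
    out = np.zeros_like(volume, dtype=np.float32)
    out[mask] = (volume[mask] - mean) / (std + 1e-8)  # zero mean, unit variance
    return out

# Applied independently to each modality of each patient, e.g.:
# flair_n, t2_n, t1ce_n = map(zscore_brain, (flair, t2, t1ce))
```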
Figure 1 shows some samples of the four input modalities and their corresponding normalization results. (Figure 1: Two sets of four MRI modalities and their corresponding Z-Score normalization.)

Tumor representation in each slice. In our investigation, we found that the size and shape of the tumor increase or decrease steadily across sequential slices. The tumor emerges in the first slices with a small size at any possible location of the image. Then, in the following slices, the tumor remains in the same location inside the image, but with a bigger size. Next, after reaching its maximum size, the tumor size starts to decrease until it vanishes entirely. This is the core concept of our pre-processing method. These findings are indicated in Figs. 2 and 3. (Figure 2: Illustration of the ground truth in 24 different slices in Brats18_2013_23_1; the red numbers indicate the slice number. Figure 3: Demonstration of the ground truth in 24 different slices in Brats18_TCIA02_377_1; the red numbers indicate the slice number, and different parts of the tumor are illustrated with different colors.)

The main reason for using the mentioned four brain modalities is their unique characteristics for detecting some parts of the tumor. Moreover, to find a tumor, we need to find all three parts in each of the four modalities and then combine them to make a solid object. So, our first goal is to find one part of the tumor in each modality.

Finding the expected area of the tumor. By looking deeper into Figs. 2 and 3, we notice that the emerging, vanishing, and maximal tumor sizes occur in different slices for different patients. For instance, the biggest tumors are depicted in slices 80 and 74 for Figs. 2 and 3, respectively. Another important fact is that, to the best of our knowledge, no sharp difference can be observed in size between consecutive slices; the tumor size varies only slightly. During the investigation phase, we noticed that finding the location of the emerging and vanishing tumor is a hard and challenging task. But this is not true when we are looking for the biggest tumor inside the image. To detect the tumor area in each slice we follow five main steps: (1) read all modalities except the T1 image and compute the Z-Score normalized image, (2) binarize the obtained image with the thresholds 0.7, 0.7, and 0.9 for FLAIR, T2, and T1ce, respectively, (3) apply a morphological operator to remove some irrelevant areas, (4) multiply the binary images of FLAIR and T2 to create a new image, and (5) combine the obtained areas from each image together. This procedure is demonstrated in Figs. 4 and 5 in detail.

As the tumor appears with a higher intensity than other parts of the brain in the FLAIR and T2 images, the binarization threshold needs to be larger than the mean value (we selected 0.7). Moreover, the tumor is much brighter in T1ce than in the FLAIR and T2 images. Therefore, a bigger binarization threshold needs to be selected for T1ce (we selected 0.9). If a small threshold value is selected for binarization, several normal tissues will be identified as tumor objects.
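Steps (2)-(5) of this per-slice procedure might be sketched as follows. The thresholds are the ones stated above, but the 3 × 3 opening used as the morphological cleanup is an assumed choice, since the paper does not specify the structuring element.

```python
import numpy as np
from scipy import ndimage

def candidate_tumor_masks(flair_n, t2_n, t1ce_n):
    """Binarize the normalized slices and form the FLAIR*T2 product.

    Inputs are Z-Score normalized 2D slices. Returns the binary dot
    product of FLAIR and T2 plus the cleaned binarized T1ce, whose
    objects are merged in later subject to the selection rules.
    """
    b_flair = flair_n > 0.7
    b_t2 = t2_n > 0.7
    b_t1ce = t1ce_n > 0.9
    # morphological opening removes small irrelevant areas (assumed 3x3)
    structure = np.ones((3, 3), dtype=bool)
    b_flair = ndimage.binary_opening(b_flair, structure)
    b_t2 = ndimage.binary_opening(b_t2, structure)
    b_t1ce = ndimage.binary_opening(b_t1ce, structure)
    flair_t2 = b_flair & b_t2      # binary dot product of FLAIR and T2
    return flair_t2, b_t1ce
```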
(Figure 4: Demonstration of the process of finding a part of the tumor in each slice. The yellow text in the top-left corner and at the bottom indicates the slice number and sample ID, respectively; the conditions for selecting an object are also shown in yellow, and red identifies the presented image. All binary objects inside the binarized T1ce image are bigger than the threshold criteria, so they were eliminated. Figure 5: Demonstration of the process of finding a part of the tumor in each slice, with the same color conventions; the detected object from T1ce is indicated by the blue text.)

In the next step, as there are some tumor-like objects inside the obtained image, we need to discard them using some simple but precise rules. As shown in Figs. 4 and 5, to decide whether to select a binary object as a part of the tumor or not, extra constraints are applied to the binarized T1ce images: (1) object solidity bigger than 0.7, (2) object area bigger than 500 pixels, and (3) length of the major axis of the object bigger than 35 pixels. Any object in the binarized T1ce image that does not pass these criteria is removed from the image (Fig. 4). The defined constraints (rules) are the same for all the binarized images and do not need to be altered to obtain a good result. Moreover, to overcome the problem of using MRI images with different sizes and thicknesses, the value for each constraint was selected based on a wide span. For instance, in the BRATS 2018 dataset, we defined the smallest object area value as 500 pixels. While using a wide span for selecting an object decreases accuracy, applying the other rules (solidity and major axis length) enables us to overcome that problem effectively.

After detecting all binary objects using morphological operators, we need to add them together to create a binary tumor image. But there is still another condition before adding the binarized T1ce to the image obtained from the binary dot product of the FLAIR and T2 images: we only consider a binary object inside the T1ce image if it has an overlapping area bigger than 20 pixels with a binary object inside the image obtained from the binary dot product of FLAIR and T2 (Fig. 5).

In the next step, we need to find the location of the big tumor inside the slices. To this end, we need to be sure that all detected objects are truly tumor objects. To overcome this issue, we track each tumor object in sequential slices: if a tumor object is found in almost the same position with a small change in size across sequential slices, we can be sure that it is a true tumor object.
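These selection rules translate naturally into scikit-image region properties. The helper below is a sketch following our reading of Figs. 4 and 5; the function name and the final union with the FLAIR × T2 mask are our assumptions.

```python
import numpy as np
from skimage import measure

def select_t1ce_objects(b_t1ce, flair_t2, min_overlap=20):
    """Keep only T1ce objects that pass the paper's rules.

    Rules from the text: solidity > 0.7, area > 500 px, major axis
    length > 35 px, and an overlap of more than 20 px with the
    FLAIR*T2 binary image.
    """
    labels = measure.label(b_t1ce)
    keep = np.zeros_like(b_t1ce, dtype=bool)
    for region in measure.regionprops(labels):
        if (region.solidity > 0.7 and region.area > 500
                and region.major_axis_length > 35):
            obj = labels == region.label
            if np.logical_and(obj, flair_t2).sum() > min_overlap:
                keep |= obj                    # overlap condition satisfied
    return keep | flair_t2                     # combine the obtained areas
```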
After finding the true tumor object in a slice, we search the same area in all other slices to find the biggest object. This procedure is explained in Fig. 6 in detail. (Figure 6: Pseudocode of the proposed algorithm for detecting the biggest tumor among all slices.) Finally, using morphological operators, this object can be enlarged to cover all possible missing tumor areas (we call this area the biggest expected area). By finding this object and its location, we can search only in this area to find the tumor and segment it in all slices (Fig. 7). (Figure 7: Two examples of finding the tumor object (expected area) and its corresponding center location, and applying morphological filters to enlarge the tumor regions. The first row indicates the ground-truth images, the second row demonstrates the tumor object, and the third row shows the enlarged tumor objects obtained from the second row. The yellow number in the top-left corner indicates the slice number.)

Finally, based on the information explained in Sect. 2.1.2 and also Figs. 2 and 3, it is obvious that, moving towards the first or last slice, the size of the tumor decreases. So, we can create a binary mask for all slices in which the size of the expected area differs only slightly, following the expected slice-to-slice difference.
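The tracking-and-enlargement idea of Fig. 6 might be sketched as follows. The centroid-shift tolerance and the number of dilation iterations are assumptions on our part, as the paper presents the algorithm only as pseudocode.

```python
import numpy as np
from scipy import ndimage

def biggest_expected_area(slice_masks, max_shift=10.0, dilate_iter=5):
    """Track tumor objects across sequential slices, keep the biggest,
    and enlarge it morphologically into the 'biggest expected area'.

    `slice_masks` is a list of 2D binary masks ordered by slice index.
    `max_shift` (allowed centroid movement between consecutive slices)
    and `dilate_iter` are assumed values.
    """
    best_area, best_mask = 0, None
    prev_centroid = None
    for mask in slice_masks:
        if not mask.any():
            prev_centroid = None
            continue
        centroid = np.array(ndimage.center_of_mass(mask))
        area = int(mask.sum())
        # a true tumor object stays in roughly the same position
        tracked = (prev_centroid is not None
                   and np.linalg.norm(centroid - prev_centroid) < max_shift)
        if tracked and area > best_area:
            best_area, best_mask = area, mask
        prev_centroid = centroid
    if best_mask is None:
        return None
    # enlarge to cover possible missing tumor areas
    return ndimage.binary_dilation(best_mask, iterations=dilate_iter)
```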
Deep learning architecture. In today's artificial intelligence (AI) applications, convolutional neural network (ConvNet/CNN) pipelines, a class of deep feed-forward artificial neural networks, have shown tremendous breakthroughs in medical image analysis and processing [28-32]. The structure of a CNN model was inspired by the biological organization of the visual cortex in the human brain, which uses local receptive fields; this architecture is similar to the connectivity pattern of neurons. As the CNN model is not invariant to rotation and scale, segmenting an object that can move within the image is a demanding task.

One of the key concerns about using a CNN model in the field of medical imaging lies in evaluation time, as many medical applications need prompt responses to minimize the time for additional analysis and treatment. The situation is more complicated when we are dealing with a volumetric medical image: applying a 3D CNN model for detecting lesions with traditional sliding-window approaches cannot achieve an acceptable result. This is highly impractical when there are high-resolution volumetric images and a large number of 3D block samples to investigate. In all brain volumetric images, the location, size, orientation, and shape of the tumor differ from one patient to another and cause uncertainty in finding the potential region of the tumor. Also, it is more reasonable to search only a small part of the image rather than the whole image. To this end, in this work, we first identify the region of interest with a high probability of containing the tumor and then apply the CNN model to this smaller region, thus reducing computational cost and increasing system efficacy.

The major drawback of convolutional neural network (CNN) models lies in fuzzy segmentation outcomes and the spatial information reduction caused by the strides of convolutions and pooling operations. To further improve segmentation accuracy and efficiency, several advanced strategies have been applied to obtain better segmentation results [21,25,33,34], with approaches like dilated convolution/pooling [35-37] and skip connections [38,39], as well as additional analysis and new post-processing modules like the Conditional Random Field (CRF) and Hidden Conditional Random Field (HCRF) [10,40,41]. The dilated convolution method allows a large receptive field without applying pooling layers, with the aim of relieving information loss during the training phase. Skip connections have the capability of progressively restoring the original spatial resolution by integrating features and adding outputs from previous layers to the current layer in the down-sampling step. Recently, the attention mechanism has been employed in the deep learning context and has shown excellent performance for numerous computer vision tasks, including instance segmentation [42], image denoising [43], person re-identification [44], image classification [45,46], etc.

Proposed structure. In this study, a cascade CNN model is proposed that combines both local and global information from across the different MRI modalities. Also, a distance-wise attention mechanism is proposed to consider the effect of the brain tumor location in the four input modalities. This distance-wise attention mechanism applies the key location feature of the image to the fully-connected layer to overcome overfitting problems, using many parallel convolutional layers to differentiate between classes, like the self-co-attention mechanism. Although many CNN-based networks have been employed for similar multi-modality tumor segmentation in prior studies, none of them uses a combination of an attention-based mechanism and an expected-area approach.

(Figure 8: An example depicting the whole brain and its corresponding binary mask for two modalities; the expected area is shown in the third column, and the centers of the binary mask and the expected area are marked by a red star. Figure 9: Illustration of parameter calculation in the Distance-Wise Attention (DWA) module; the blue and red pixels are the background and the brain, respectively, the expected area is represented by a yellow object, and the size of the image is 240 × 240.)

Distance-wise attention (DWA) module. By considering the effect of the dissimilarity between the center of the tumor and the expected area, we can estimate the probability of encountering each pixel during the investigation process. In other words, knowing the location of the center of the expected area (see Fig. 8) leads to a better differentiation between pixels of the three tumor classes. The DWA module explores distance-wise dependencies in each slice of the four employed modalities for the selection of useful features. Given an input feature channel set $A \in \mathbb{R}^{H \times W \times N}$, $A = \{A_1, A_2, \ldots, A_N\}$, where $A_i \in \mathbb{R}^{H \times W}$ indicates a channel, the variables N, H, and W are the number of input channels, the spatial height, and the spatial width, respectively. So, as shown in Fig. 9, the centroid O of the object is obtained on each channel map by:
$y_c = y_0 + \frac{H_{object}}{2}, \qquad x_c = x_0 + \frac{W_{object}}{2}$  (1)

$W_{object} = \max_{\text{rows}} \big(\textstyle\sum \text{pixels} = 1 \text{ in each row}\big), \qquad H_{object} = \max_{\text{columns}} \big(\textstyle\sum \text{pixels} = 1 \text{ in each column}\big)$  (2)

$y_0 = \text{location of the starting point in } H_{object}, \qquad x_0 = \text{location of the starting point in } W_{object}$  (3)

where $y_c$ and $x_c$ represent the center of the white object, and $W_{object}$ and $H_{object}$ indicate the width and height of the object, respectively. By calculating Eq. (1) for both the expected area (see Fig. 8c) and the binarization of the input modality in each slice (see Fig. 8b), the distance-wise term can be defined as

$\mathrm{Distance}(i, j) = \dfrac{\sqrt{(x_{ci} - x_{cj})^2 + (y_{ci} - y_{cj})^2}}{\text{number of rows of the image}}$  (4)

where i and j represent the binarized input modality and the expected region, respectively. To obtain the width $W_{object}$ of the object in Eq. (2), we count the number of pixels with value 1 in each row and then select the row with the maximum count; for the height $H_{object}$, we apply the same strategy vertically. Figure 9 provides more details about computing the parameters of the DWA module. (Figure 10: Distance calculation based on the center of the expected area and the four input modality masks.) As shown in Fig. 10, this process is done for all input modalities, and their mean is fed to the output of the module for each slice.
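A direct transcription of Eqs. (1)-(4) in NumPy might look like this. The function names are ours, and both masks are assumed to be non-empty.

```python
import numpy as np

def object_center(binary: np.ndarray):
    """Center of a binary object via Eqs. (1)-(3)."""
    ys, xs = np.nonzero(binary)               # assumed non-empty
    w_obj = binary.sum(axis=1).max()          # Eq. (2): max 1-count over rows
    h_obj = binary.sum(axis=0).max()          # Eq. (2): max 1-count over columns
    y0, x0 = ys.min(), xs.min()               # Eq. (3): starting point
    return y0 + h_obj / 2.0, x0 + w_obj / 2.0 # Eq. (1)

def dwa_distance(modality_mask: np.ndarray, expected_area: np.ndarray) -> float:
    """Eq. (4): center distance normalized by the number of image rows."""
    yi, xi = object_center(modality_mask)
    yj, xj = object_center(expected_area)
    rows = modality_mask.shape[0]             # e.g. 240 for BRATS slices
    return float(np.sqrt((xi - xj) ** 2 + (yi - yj) ** 2) / rows)
```

Per the text, this distance is computed for each input modality and the mean over modalities is the module's output for the slice.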
Cascade CNN model. The flowchart of our cascade model is depicted in Fig. 11. To capture as many rich tumor features as possible, we use four modalities, namely fluid-attenuated inversion recovery (FLAIR), T1-contrasted (T1C), T1-weighted (T1), and T2-weighted (T2). Moreover, we add the four corresponding Z-Score normalized images of the four input modalities to improve the Dice score of the segmentation results without adding more complicated layers to our structure. Due to the use of a powerful preprocessing step that eliminates about 80% of the insignificant information in each input image, there is no need for a complex deep network such as [10,22,32]. In other words, by selecting approximately 20% of the whole image (this percentage is the mean over all slices of a patient) for each input modality and its corresponding Z-Score normalized image, there are fewer pixels to investigate. Also, considering the effect of the center of the tumor on correct detection improves the segmentation result without using a deep CNN model. So, in this study, a cascade CNN model with eight input images is proposed, which employs the DWA module at the end of the network to avoid overfitting.

As demonstrated in Fig. 11, our CNN model includes two different routes which extract local and global features from the four input modalities and the corresponding Z-Score normalized images. The key goal of the first route is detecting the pixels on the border of each tumor (the global feature), whereas the key goal of the second route is labelling each pixel inside the tumor (the local feature). In the first route, a 40 × 40 patch (red window) is selected from each input image to feed the network. It is worth noting that we extract only patches whose centers are located in the obtained expected area, as shown in Fig. 12. (Figure 11: Our implemented cascade structure. The green and red windows inside the input images represent the local and global patches, respectively; the DWA module is placed at the end of the structure, before the FC layer. Figure 12: Our implemented cascade structure. The blue and yellow windows inside the input images represent the local and global patches, respectively; the red contour indicates the obtained expected area.) The presence of the Z-Score normalized images improves the accuracy of tumor border recognition. The number of convolutional layers for extracting the global features is five. Unlike the first route, the local feature extraction route has only two convolution layers, and both are fed with eight 15 × 15 input patches (green window).

The core building block of the proposed CNN structure is the convolutional layer. This layer computes the dot product between input data of arbitrary size and a set of learnable filters (masks), much like a traditional neural network [32,48,49]. The size of the applied masks is always smaller than the dimensions of the input data in all kinds of CNNs. Regularly, the first convolution layers, applied at the beginning of the CNN model, play a significant role in extracting low-level features such as luminance and texture discontinuity [50,51]. High-level features, including tumor region masks, are investigated in the deeper convolutional layers of the pipeline, while the middle convolutional layers are utilized for investigating mid-level features such as edges, curves, and points.

As demonstrated in the first row of Fig. 12, the center of each patch is located inside the red border, regardless of whether part of the window lies outside the red border. By doing this, we do not investigate insignificant areas (which do not include the tumor). This is more helpful and reasonable when we are encountering imbalanced data: samples of the lesion are equalized to the normal tissue, which avoids overfitting in the training step. Additionally, this approach is helpful when dealing with images of various sizes and thicknesses, as insignificant parts of the images are discarded before affecting the tumor recognition algorithm.

After each convolution layer, there is an activation layer that helps the network learn complex patterns without changing the dimension of the input feature maps. In other words, with an increased number of layers, and to overcome the vanishing gradient problem in the training step, an activation function is applied to each feature map to enhance computational effectiveness by inducing sparsity [51,53]. In this study, all negative values are changed to zero using the Rectified Linear Unit (ReLU) activation function, which acts as a linear function for positive and zero values. This means some nodes obtain null weights, become useless, and do not learn anything; fewer neurons are activated because of the limitations applied by this layer.

In contrast to the convolution operation, the pooling layer, which is regularly incorporated between two sequential convolutional layers, has no parameters and summarizes the key information in the sliding window (mask) without losing important details. Additionally, as the dimension of the feature maps is decreased (in both column and row) in this layer, the training time is reduced and overfitting is mitigated [32,49]. Using the max-pooling method in this paper, the feature map is divided into a set of non-overlapping regions, and the maximum value inside each region is taken.
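A minimal PyTorch sketch of this two-route layout is given below: five convolutional layers for the global 40 × 40 patches and two for the local 15 × 15 patches, over the eight input channels (four modalities plus their Z-Score images). The filter counts, pooling placement, and the way the DWA distance is appended before the fully-connected layer are assumptions, since the paper reports the design at the level of Fig. 11 rather than exact layer shapes.

```python
import torch
import torch.nn as nn

class TwoRouteCNN(nn.Module):
    """Sketch of the cascade two-route CNN with a DWA scalar feature."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1),
                                 nn.ReLU(inplace=True))
        # global route: five conv layers on 40x40 patches
        self.global_route = nn.Sequential(
            block(8, 32), block(32, 32), nn.MaxPool2d(2),
            block(32, 64), block(64, 64), nn.MaxPool2d(2),
            block(64, 64), nn.AdaptiveAvgPool2d(1))
        # local route: two conv layers on 15x15 patches
        self.local_route = nn.Sequential(
            block(8, 32), block(32, 64), nn.AdaptiveAvgPool2d(1))
        self.dropout = nn.Dropout(p=0.07)          # 7% dropout before the FC layer
        self.fc = nn.Linear(64 + 64 + 1, n_classes)  # +1 for the DWA distance

    def forward(self, global_patch, local_patch, dwa_dist):
        g = self.global_route(global_patch).flatten(1)   # (B, 64)
        l = self.local_route(local_patch).flatten(1)     # (B, 64)
        feats = torch.cat([g, l, dwa_dist.unsqueeze(1)], dim=1)
        return self.fc(self.dropout(feats))  # softmax applied inside the loss

# Shape check with random inputs:
# model = TwoRouteCNN()
# logits = model(torch.randn(2, 8, 40, 40), torch.randn(2, 8, 15, 15), torch.rand(2))
```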
By using the max- pooling method in this paper, the feature map is divided into a set of regions with no overlapping, then takes the maximum number inside each area. As in a CNN pipeline, the dimension of the receptive field does not cover the entire spatial dimension of the image in the last convolutional layer, the produced maps by the last convolutional layer related to only an area of the whole input image. Due to this characterization of the receptive field, to learn the non-linear combinations of the high-level features, one or more FC layers have to be used. It should be noticed that before employing the achieved feature maps in the fully connected layer, these two-dimensional feature maps need to be changed 54 55 into a one-dimensional matrix . Furthermore, to reduce the effect of the overt fi ting a dropout layer with a 7% dropout probability has been employed (before the FC layer). Unlike the convolutional layers, the fully connected layers are composed of independent more parameters, so they are harder to t rain . The last layer in the proposed pipeline for the classification task is the Softmax regres - sion (Multi-class Logistic Regression) layer that is used to distinguish one class from the others. This Multi-class Logistic regression can follow a probability distribution between the range [0,1] by normalizing an input value into a vector of values. This procedure demonstrates how likely the input data (image) belongs to a predefined 24,48 class. It should be mentioned that the sum of the output probability distribution is equal to o ne . In the proposed network, we employed the stochastic gradient descent approach as the cross-entropy loss function to overcome the class imbalance p roblem . This loss function calculates the discrepancy between the ground truth and the network’s predicted output. Also, in the output layer, four logistic units were utilized to investigate the probabilities of the given sample belonging to either of the four classes. The loss function can be formulated as follows: loss =− log i (5) d=1 where loss implies the loss for the i-th training sample. Also, U demonstrates the unnormalized score for the i p ground-truth class P. This score can be generated by considering the effect of the outputs of the former FC layer (multiplying) with the parameters of the corresponding logistic unit. To get a normalized score to determine the between-class variation in the range of 0 and 3, the denominator adds the predicted scores for all the logistic units Q. As only four output neurons have been used in this study, the value for Q is equal to four. In other words, each pixel can be categorized into one of four classes. Experiments Data and implementation details. In this study, training, validation, and testing of our pipeline have been accomplished on the BRATS 2018 dataset which includes the Multi-Modal MRI images and patient’s clini- cal data with various heterogeneous histological sub-regions, different degrees of aggressiveness, and variable prognosis. These Multi-Modal MR images have the dimensions of 240 × 240 × 150 and were clinically obtained using various magnetic field strengths, scanners, and different protocols from many institutions that are dis- similar to the Computed Tomography (CT) images. 
Experiments

Data and implementation details. In this study, training, validation, and testing of our pipeline were carried out on the BRATS 2018 dataset, which includes multi-modal MRI images and patients' clinical data with various heterogeneous histological sub-regions, different degrees of aggressiveness, and variable prognosis. These multi-modal MR images have dimensions of 240 × 240 × 150 and were clinically obtained using various magnetic field strengths, scanners, and different protocols from many institutions, unlike Computed Tomography (CT) images. There are four MRI sequences for the training, validation, and testing steps: Fluid Attenuated Inversion Recovery (FLAIR), T2 (T2-weighted, highlighting water locations), T1 with gadolinium-enhancing contrast, and T1 (T1-weighted, highlighting fat locations). This dataset includes 75 cases with LGG and 210 cases with HGG, which we randomly divided into training data (80%), validation data (10%), and test data (10%). Labels were annotated by neuro-radiologists with tumor labels (necrosis, edema, non-enhancing tumor, and enhancing tumor, represented by 1, 2, 3, and 4, respectively; zero indicates normal tissue). Label 3 is not used.

The experimental outcomes were obtained for the proposed structure using MATLAB on an Intel Core i7 3.4 GHz machine with 32 GB RAM and 15 MB cache, over CUDA 9.0, CuDNN 5.1, and an NVIDIA GPU 1080Ti, under a 64-bit operating system. We adopted Adaptive Moment Estimation (Adam) for the training step, with a batch size of 2, a weight decay of 10^-5, and an initial learning rate of 10^-4. Training took 13 h in total, and testing takes 7 s per volume.

| Method | Dice (Enh) | Dice (Whole) | Dice (Core) | Sensitivity (Enh) | Sensitivity (Whole) | Sensitivity (Core) |
|---|---|---|---|---|---|---|
| Two-route CNN | 0.2531 | 0.2796 | 0.2143 | 0.2456 | 0.2569 | 0.2007 |
| Global route CNN + Attention mechanism | 0.3128 | 0.3410 | 0.3025 | 0.3343 | 0.2947 | 0.2896 |
| Local route CNN + Attention mechanism | 0.3412 | 0.3671 | 0.3625 | 0.3356 | 0.3819 | 0.3808 |
| Two-route CNN + Attention mechanism | 0.4136 | 0.3754 | 0.3988 | 0.3910 | 0.3951 | 0.3822 |
| Global route CNN + Preprocessing | 0.7868 | 0.7916 | 0.7867 | 0.7426 | 0.7965 | 0.7448 |
| Local route CNN + Preprocessing | 0.8602 | 0.8343 | 0.8516 | 0.8751 | 0.8569 | 0.8485 |
| Two-route CNN + Preprocessing | 0.8756 | 0.8550 | 0.8715 | 0.8941 | 0.9036 | 0.8512 |
| Proposed method | 0.9113 | 0.9203 | 0.8726 | 0.9217 | 0.9386 | 0.9712 |

Table 1. Evaluation results with different pipeline configurations on the BRATS 2018 dataset (Dice and Sensitivity are means).

Evaluation measure. The effectiveness of the approach is assessed by metrics regarding the enhancing core (EC), tumor core (TC, including the necrotic core plus the non-enhancing core), and whole tumor (WT, including all classes of tumor structures). The Dice similarity coefficient (DSC) is employed as the evaluation metric to compute the overlap between the ground truth and the predictions. The experimental results were obtained using three criteria, namely HAUSDORFF99, Dice similarity, and Sensitivity [23,58-60]. The Hausdorff score assesses the distance between the surface of the predicted regions and that of the ground-truth regions. The Dice score computes the overlap between the ground truths and the predictions. Specificity (true negative rate) is the fraction of non-tumor pixels that are identified correctly, and Sensitivity (recall, or true positive rate) is the fraction of tumor pixels that are identified correctly. These criteria can be formulated as:

$\mathrm{DICE}(R_p, R_a) = 2 \cdot \dfrac{|R_p \cap R_a|}{|R_p| + |R_a|}$  (6)

$\mathrm{Sensitivity} = \dfrac{|R_p \cap R_a|}{|R_a|}$  (7)

where $R_p$, $R_a$, and $R_n$ denote the predicted tumor regions, the actual labels, and the actual non-tumor labels, respectively.
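Eqs. (6) and (7) translate directly into NumPy over binary masks; computing them per tumor region (enhancing, whole, core) and averaging over test volumes reproduces the reporting convention used in the tables. This is a sketch with our own function names:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Eq. (6): 2 |R_p ∩ R_a| / (|R_p| + |R_a|) over binary masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Eq. (7): |R_p ∩ R_a| / |R_a| (true positive rate)."""
    return np.logical_and(pred, truth).sum() / truth.sum()
```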
Table 1. Evaluation results with different pipeline configurations on the BRATS 2018 dataset.

                                          Dice score (mean)         Sensitivity (mean)
Method                                    Enh     Whole   Core      Enh     Whole   Core
Two-route CNN                             0.2531  0.2796  0.2143    0.2456  0.2569  0.2007
Global route CNN + Attention mechanism    0.3128  0.3410  0.3025    0.3343  0.2947  0.2896
Local route CNN + Attention mechanism     0.3412  0.3671  0.3625    0.3356  0.3819  0.3808
Two-route CNN + Attention mechanism       0.4136  0.3754  0.3988    0.3910  0.3951  0.3822
Global route CNN + Preprocessing          0.7868  0.7916  0.7867    0.7426  0.7965  0.7448
Local route CNN + Preprocessing           0.8602  0.8343  0.8516    0.8751  0.8569  0.8485
Two-route CNN + Preprocessing             0.8756  0.8550  0.8715    0.8941  0.9036  0.8512
Proposed method                           0.9113  0.9203  0.8726    0.9217  0.9386  0.9712

Evaluation measure. The effectiveness of the approach is assessed by metrics regarding the enhancing core (EC), the tumor core (TC, comprising the necrotic core plus the non-enhancing core), and the whole tumor (WT, comprising all classes of tumor structures). The experimental results were obtained using three criteria, namely HAUSDORFF99, Dice similarity, and Sensitivity [23,58-60]. The Hausdorff score assesses the distance between the surface of the predicted regions and that of the ground-truth regions. The Dice similarity coefficient (DSC) is employed to compute the overlap between the ground truth and the predictions. Sensitivity (recall, or true positive rate) is the fraction of tumor pixels that are correctly identified, while Specificity (true negative rate) is the fraction of non-tumor pixels that are correctly identified. Dice and Sensitivity can be formulated as:

$$\mathrm{DICE}(R_p, R_a) = \frac{2\,|R_p \cap R_a|}{|R_p| + |R_a|} \qquad (6)$$

$$\mathrm{Sensitivity} = \frac{|R_p \cap R_a|}{|R_a|} \qquad (7)$$

where $R_p$, $R_a$, and $R_n$ denote the predicted tumor regions, the actual tumor labels, and the actual non-tumor labels, respectively.
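A hedged NumPy/SciPy sketch of these evaluation metrics follows. The masks are toy 2-D examples standing in for one tumor sub-region; note that the HAUSDORFF99 reported in the tables is a robust percentile-style variant, whereas this sketch computes only the plain symmetric Hausdorff distance:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred, truth):
    """Eq. (6): overlap between predicted and actual tumor masks."""
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def sensitivity(pred, truth):
    """Eq. (7): fraction of actual tumor pixels that were predicted."""
    inter = np.logical_and(pred, truth).sum()
    return inter / truth.sum()

def hausdorff(pred, truth):
    """Symmetric Hausdorff distance between the two masks' point sets."""
    a, b = np.argwhere(pred), np.argwhere(truth)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

# Toy binary masks standing in for one predicted/actual tumor region.
truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), bool); pred[22:42, 22:42] = True

print(dice(pred, truth), sensitivity(pred, truth), hausdorff(pred, truth))
```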
Experimental results. For a clear quantitative and qualitative comparison, we also implemented five other models (Multi-Cascaded [34], Cascaded random forests [10], Cross-modality [22], Task Structure [21], and One-Pass Multi-Task [23]) to evaluate tumor segmentation performance. Quantitative results for the different configurations of our proposed structure are presented in Table 1.

From Table 1, we can observe that the two-route CNN model without the preprocessing approach is not able to segment the tumor area properly. Adding an attention mechanism to the two-route model without preprocessing yields better segmentation results in terms of all three criteria. Adding the preprocessing approach raises the Dice scores in the three tumor regions sharply, from 0.2531, 0.2796, and 0.2143 to 0.8756, 0.8550, and 0.8715 for Enh, Whole, and Core, respectively. Even with only a one-route CNN model (local or global features), thanks to the preprocessing approach, the CNN consistently obtains improved segmentation performance in all tumor regions. Moreover, the preprocessing method is more influential than the attention mechanism alone; in other words, the proposed attention mechanism is most helpful when it operates on the smaller part of the input image extracted by the preprocessing method. Comparing the effect of local and global features shows that the local features are more effective than the global ones.

The Dice, Sensitivity, and HAUSDORFF99 values for all input images using all the structures are reported in Table 2; for each index, the best result is the highest Dice or Sensitivity and the smallest HAUSDORFF99 value. From Table 2, it is evident that our strategy achieves the highest Sensitivity values in the Enh and Whole tumor areas, while the highest value for the Core area was obtained by [34]. Also, there is a minimal difference between the HAUSDORFF99 values of [34] and [23]. In [22], there is a significant improvement in the Enh area for all three measures, while [21] achieves the worst results in the Whole and Core areas for the HAUSDORFF99 measure. Note that the proposed method improves all criteria in comparison to the other approaches, although the Sensitivity value in the Core area using [34] is still higher. To the best of our knowledge, there are three reasons for this. First, the proposed strategy pays special attention to removing insignificant regions from the four modalities before applying them to the CNN model. Second, our method uses both the local and global features, with different numbers of convolutional layers, which exploits richer context for tumor segmentation. Third, by considering the dissimilarity between the center of the tumor and the expected area, the network can be biased towards the proper output class. Additionally, compared to state-of-the-art algorithms with heavy networks, such as [22] and [23], our approach obtains more promising performance and decreases the running time by using only a simple CNN structure. Moreover, as shown in Table 3, the proposed method segments the tumor faster than the other compared models.

Table 2. Comparison between the proposed method and other baseline approaches on the BRATS 2018 dataset.

                               Dice score (mean)        Sensitivity (mean)       HAUSDORFF99 (mm)
Method                         Enh     Whole   Core     Enh     Whole   Core     Enh    Whole  Core
Multi-Cascaded [34]            0.7178  0.8824  0.7481   0.8684  0.7621  0.9947   2.80   4.48   7.07
Cascaded random forests [10]   0.75    0.86    0.79     0.83    0.91    0.86     –      –      –
Cross-modality [22]            0.903   0.791   0.836    0.919   0.846   0.835    4.998  3.992  6.369
Task Structure [21]            0.782   0.896   0.824    –       –       –        3.567  5.733  9.270
One-Pass Multi-Task [23]       0.811   0.908   0.857    –       –       –        2.881  4.884  6.932
Proposed method                0.9113  0.9203  0.8726   0.9217  0.9386  0.9712   1.669  1.427  2.408

Table 3. Comparison of execution time of different techniques applied on the BRATS 2018 dataset for one subject.

Approach                       Time
Multi-Cascaded [34]            261 s
Cascaded random forests [10]   314 s
Cross-modality [22]            208 s
Task Structure [21]            193 s
One-Pass Multi-Task [23]       277 s
Proposed method                84 s

Figure 13. The results of brain tumor segmentation using the proposed strategy (the blue, green, and red colors denote the enhanced, core, and edema regions, respectively).

Figure 13 provides a visual demonstration of the good results achieved by our approach on the BRATS 2018 dataset. As shown, every region shares a border with all of the other regions. Owing to the intensity difference between the tumor core and enhancing areas in the T1C images (third column), the border between them can be distinguished with high accuracy without using other modalities; this is not true for the borders between the tumor core, edema, and enhanced edema areas. Given these characteristics of each modality, we observe that a very deep CNN model is not needed if we decrease the search area.

Figure 14. Comparison of brain tumor segmentation results when the DWA method is applied to the proposed CNN structure (the blue, yellow, and red colors denote the edema, enhanced, and core regions, respectively).

Figure 15. Comparison of brain tumor segmentation results of the proposed strategy with four state-of-the-art methods (the blue, yellow, and red colors denote the edema, enhanced, and core regions, respectively): (A) Multi-Cascaded [34], (B) Cascaded random forests [10], (C) Cross-modality [22], (D) One-Pass Multi-Task [23], and (E) our method.

Owing to the use of the DWA module, our model can mine more distinctive contextual information from the tumor and the brain, which leads to a better segmentation result. Figure 14 shows the improved segmentation resulting from the application of the DWA module in the proposed method, particularly at the borders of touching tumor areas. The comparison between the baselines and our model in Fig. 15 shows the effectiveness of the proposed method in distinguishing between all four regions. Figure 15(GT) indicates the ground truth corresponding to all four modalities in the same row. The Multi-Cascaded (Fig. 15A) and Cascaded random forests (Fig. 15B) approaches show satisfactory results in detecting the Edema area but cannot detect the small regions of Edema outside the main Edema body.
The Cross-modality (Fig. 15C) and One-Pass Multi-Task (Fig. 15D) approaches achieve promising results in detecting the tumor Core and Enhancing areas, especially in detecting the tumor Core at the outer border of the Enhancing area. However, some separated Edema regions are merged in the final segmentation produced by the Cross-modality method. As shown in Fig. 15(C), the Cross-modality structure reaches the lowest segmentation accuracy for the Edema regions compared to the others: it under-segments the tumor Core areas and over-segments the Edema areas. The One-Pass Multi-Task approach matches the ground-truth Core better than Fig. 15(A-C) but is still insufficiently accurate, especially in the Edema areas. Based on our evaluation, estimating the three distinct regions of a brain tumor using an attention-based mechanism is an effective way to help specialists and doctors evaluate tumor stages, which is of high interest in computer-aided diagnosis systems.

Discussion and conclusions
In this paper, we have developed a new brain tumor segmentation architecture that benefits from the characterization of the four MRI modalities; that is, each modality has unique characteristics that help the network efficiently distinguish between classes. We have demonstrated that working only on the part of the brain image near the tumor tissue allows a CNN model (the most popular deep learning architecture) to reach performance close to that of human observers. Moreover, a simple but efficient cascade CNN model has been proposed to extract both local and global features in two different ways, with different sizes of extraction patches. In our method, after extracting the tumor's expected area using a powerful preprocessing approach, only those patches whose center is located inside this area are selected to feed the network (a minimal sketch of this selection rule is given below). This reduces the computational time and enables fast predictions for classifying clinical images, as a large number of insignificant pixels are removed from the image in the preprocessing step. Comprehensive experiments have demonstrated the effectiveness of the Distance-Wise Attention mechanism in our algorithm, as well as the remarkable capacity of our entire model compared with state-of-the-art approaches. Despite the proposed approach's outstanding results compared with other recently published models, our algorithm still has limitations when the tumor volume exceeds one-third of the whole brain: the size of the tumor's expected area then increases, which degrades the feature extraction performance.
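As a rough illustration of the center-based patch selection described in the conclusions, here is a hedged NumPy sketch. The patch size, stride, mask, and function name are hypothetical placeholders chosen for the example, not values or code reported in the paper:

```python
import numpy as np

def select_patches(image, expected_mask, patch=33, stride=4):
    """Yield patches whose CENTER lies inside the expected tumor area.

    `expected_mask` stands in for the boolean map of the tumor's expected
    area produced by the preprocessing stage; `patch` and `stride` are
    illustrative choices.
    """
    half = patch // 2
    H, W = expected_mask.shape
    for r in range(half, H - half, stride):
        for c in range(half, W - half, stride):
            if expected_mask[r, c]:   # center test: skip insignificant pixels
                yield image[r - half:r + half + 1, c - half:c + half + 1]

# Toy slice and mask standing in for one MRI modality.
slice_ = np.random.rand(240, 240)
mask = np.zeros((240, 240), bool); mask[100:140, 90:150] = True
n = sum(1 for _ in select_patches(slice_, mask))
print(n, "patches selected instead of scanning the whole slice")
```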
Received: 12 February 2021; Accepted: 7 May 2021

References
1. Van Meir, E. G. et al. Exciting new advances in neuro-oncology: The avenue to a cure for malignant glioma. CA Cancer J. Clin. 60(3), 166–193. https://doi.org/10.3322/caac.20069 (2010).
2. Bakas, S. et al. Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features. Sci. Data 4(1), 1–13. https://doi.org/10.1038/sdata.2017.117 (2017).
3. Khosravanian, A., Rahmanimanesh, M., Keshavarzi, P. & Mozaffari, S. Fast level set method for glioma brain tumor segmentation based on superpixel fuzzy clustering and lattice Boltzmann method. Comput. Methods Programs Biomed. 198, 105809. https://doi.org/10.1016/j.cmpb.2020.105809 (2020).
4. Tang, Z., Ahmad, S., Yap, P. T. & Shen, D. Multi-atlas segmentation of MR tumor brain images using low-rank based image recovery. IEEE Trans. Med. Imaging 37(10), 2224–2235. https://doi.org/10.1109/TMI.2018.2824243 (2018).
5. Bakas, S. Segmentation labels and radiomic features for the pre-operative scans of the TCGA-LGG collection. Cancer Imag. Arch. https://doi.org/10.7937/K9/TCIA.2017.GJQ7R0EF (2017).
6. Ramli, N. M., Hussain, M. A., Jan, B. M. & Abdullah, B. Online composition prediction of a debutanizer column using artificial neural network. Iran. J. Chem. Chem. Eng. 36(2), 153–174. https://doi.org/10.30492/IJCCE.2017.26704 (2017).
7. Le, Q. V. & Mikolov, T. Distributed representations of sentences and documents. https://doi.org/10.1145/2740908.2742760 (2014).
8. Kamari, E., Hajizadeh, A. A. & Kamali, M. R. Experimental investigation and estimation of light hydrocarbons gas-liquid equilibrium ratio in gas condensate reservoirs through artificial neural networks. Iran. J. Chem. Chem. Eng. 39(6), 163–172. https://doi.org/10.30492/ijcce.2019.36496 (2020).
9. Ganjkhanlou, Y. et al. Application of image analysis in the characterization of electrospun nanofibers. Iran. J. Chem. Chem. Eng. 33(2), 37–45. https://doi.org/10.30492/IJCCE.2014.10750 (2014).
10. Chen, G., Li, Q., Shi, F., Rekik, I. & Pan, Z. RFDCR: Automated brain lesion segmentation using cascaded random forests with dense conditional random fields. Neuroimage 211, 116620. https://doi.org/10.1016/j.neuroimage.2020.116620 (2020).
11. Jalalifar, A., Soliman, H., Ruschin, M., Sahgal, A. & Sadeghi-Naini, A. A brain tumor segmentation framework based on outlier detection using one-class support vector machine. In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC) 1067–1070 (2020). https://doi.org/10.1109/EMBC44109.2020.91762.
12. Torabi Dashti, H., Masoudi-Nejad, A. & Zare, F. Finding exact and solo LTR-retrotransposons in biological sequences using SVM. Iran. J. Chem. Chem. Eng. 31(2), 111–116. https://doi.org/10.30492/IJCCE.2012.5998 (2012).
13. Partovi, S. M. A. & Sadeghnejad, S. Reservoir rock characterization using wavelet transform and fractal dimension. Iran. J. Chem. Chem. Eng. 37(3), 223–233. https://doi.org/10.30492/IJCCE.2018.27647 (2018).
14. Antonelli, M. et al. GAS: A genetic atlas selection strategy in multi-atlas segmentation framework. Med. Image Anal. 52, 97–108. https://doi.org/10.1016/j.media.2018.11.007 (2019).
15. Li, G., Yang, Y. & Qu, X. Deep learning approaches on pedestrian detection in hazy weather. IEEE Trans. Ind. Electron. 67(10), 8889–8899. https://doi.org/10.1109/TIE.2019.2945295 (2020).
16. Brunetti, A., Buongiorno, D., Trotta, G. F. & Bevilacqua, V. Computer vision and deep learning techniques for pedestrian detection and tracking: A survey. Neurocomputing 300, 17–33. https://doi.org/10.1016/j.neucom.2018.01.092 (2018).
17. Tu, Y. H. et al. An iterative mask estimation approach to deep learning based multi-channel speech recognition. Speech Commun. 106, 31–43. https://doi.org/10.1016/j.specom.2018.11.005 (2019).
18. Mitra, V. et al. Robust features in deep-learning-based speech recognition. In New Era for Robust Speech Recognition 187–217 (Springer International Publishing, 2017).
19. Iqbal, S. et al. Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation. Microsc. Res. Tech. 82(8), 1302–1315. https://doi.org/10.1002/jemt.23281 (2019).
20. Zhao, X. et al. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med. Image Anal. 43, 98–111. https://doi.org/10.1016/j.media.2017.10.002 (2018).
21. Zhang, D. et al. Exploring task structure for brain tumor segmentation from multi-modality MR images. IEEE Trans. Image Process. https://doi.org/10.1109/TIP.2020.3023609 (2020).
22. Zhang, D. et al. Cross-modality deep feature learning for brain tumor segmentation. Pattern Recognit. https://doi.org/10.1016/j.patcog.2020.107562 (2020).
23. Zhou, C., Ding, C., Wang, X., Lu, Z. & Tao, D. One-pass multi-task networks with cross-task guided attention for brain tumor segmentation. IEEE Trans. Image Process. 29, 4516–4529. https://doi.org/10.1109/TIP.2020.2973510 (2020).
24. Havaei, M. et al. Brain tumor segmentation with deep neural networks. Med. Image Anal. 35, 18–31. https://doi.org/10.1016/j.media.2016.05.004 (2017).
25. Coupé, P. et al. AssemblyNet: A large ensemble of CNNs for 3D whole brain MRI segmentation. Neuroimage 219, 117026. https://doi.org/10.1016/j.neuroimage.2020.117026 (2020).
26. Bakas, S. Segmentation labels and radiomic features for the pre-operative scans of the TCGA-GBM collection. Cancer Imag. Arch. https://doi.org/10.7937/K9/TCIA.2017.KLXWJJ1Q (2017).
27. Menze, B. H. et al. The multimodal brain tumor image segmentation benchmark (BRATS). IEEE Trans. Med. Imag. 34(10), 1993–2024. https://doi.org/10.1109/TMI.2014.2377694 (2015).
28. Islam, M. Z., Islam, M. M. & Asraf, A. A combined deep CNN-LSTM network for the detection of novel coronavirus (COVID-19) using X-ray images. Informatics Med. Unlocked 20, 100412. https://doi.org/10.1016/j.imu.2020.100412 (2020).
29. WaleedSalehi, A., Baglat, P. & Gupta, G. Review on machine and deep learning models for the detection and prediction of coronavirus. Mater. Today Proc. https://doi.org/10.1016/j.matpr.2020.06.245 (2020).
30. Kavitha, B. & Sarala Thambavani, D. Artificial neural network optimization of adsorption parameters for Cr(VI), Ni(II) and Cu(II) ions removal from aqueous solutions by riverbed sand. Iran. J. Chem. Chem. Eng. 39(5), 203–223. https://doi.org/10.30492/ijcce.2020.39785 (2020).
31. Azari, A., Shariaty-Niassar, M. & Alborzi, M. Short-term and medium-term gas demand load forecasting by neural networks. Iran. J. Chem. Chem. Eng. 31(4), 77–84. https://doi.org/10.30492/IJCCE.2012.5923 (2012).
32. Ranjbarzadeh, R. et al. Lung infection segmentation for COVID-19 pneumonia based on a cascade convolutional network from CT images. Biomed Res. Int. https://doi.org/10.1155/2021/5544742 (2021).
33. Badrinarayanan, V., Kendall, A. & Cipolla, R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 39(12), 2481–2495. https://doi.org/10.1109/TPAMI.2016.2644615 (2017).
34. Hu, K. et al. Brain tumor segmentation using multi-cascaded convolutional neural networks and conditional random field. IEEE Access 7, 92615–92629. https://doi.org/10.1109/ACCESS.2019.2927433 (2019).
35. Geng, L. et al. Encoder-decoder with dense dilated spatial pyramid pooling for prostate MR images segmentation. Comput. Assist. Surg. 24(sup2), 13–19. https://doi.org/10.1080/24699322.2019.1649069 (2019).
36. Geng, L., Zhang, S., Tong, J. & Xiao, Z. Lung segmentation method with dilated convolution based on VGG-16 network. Comput. Assist. Surg. 24(sup2), 27–33. https://doi.org/10.1080/24699322.2019.1649071 (2019).
37. Wang, B. et al. Deeply supervised 3D fully convolutional networks with group dilated convolution for automatic MRI prostate segmentation. Med. Phys. 46(4), 1707–1718. https://doi.org/10.1002/mp.13416 (2019).
38. Ali, M. J. et al. Enhancing breast pectoral muscle segmentation performance by using skip connections in fully convolutional network. Int. J. Imaging Syst. Technol. https://doi.org/10.1002/ima.22410 (2020).
39. Zhou, Z., Siddiquee, M. M. R., Tajbakhsh, N. & Liang, J. UNet++: Redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans. Med. Imaging 39(6), 1856–1867. https://doi.org/10.1109/TMI.2019.2959609 (2020).
40. Siddiqi, M. H., Ali, R., Khan, A. M., Park, Y. T. & Lee, S. Human facial expression recognition using stepwise linear discriminant analysis and hidden conditional random fields. IEEE Trans. Image Process. 24(4), 1386–1398. https://doi.org/10.1109/TIP.2015.2405346 (2015).
41. Marcheggiani, D., Täckström, O., Esuli, A. & Sebastiani, F. Hierarchical multi-label conditional random fields for aspect-oriented opinion mining. Lect. Notes Comput. Sci. 8416 LNCS, 273–285. https://doi.org/10.1007/978-3-319-06028-6_23 (2014).
42. Zhou, T., Ruan, S., Guo, Y. & Canu, S. A multi-modality fusion network based on attention mechanism for brain tumor segmentation. In Proceedings - International Symposium on Biomedical Imaging 377–380 (2020). https://doi.org/10.1109/ISBI45749.2020.9098392.
43. Tian, C. et al. Attention-guided CNN for image denoising. Neural Netw. 124, 117–129. https://doi.org/10.1016/j.neunet.2019.12.024 (2020).
44. Chen, X., Zheng, L., Zhao, C., Wang, Q. & Li, M. RRGCCAN: Re-ranking via graph convolution channel attention network for person re-identification. IEEE Access 8, 131352–131360. https://doi.org/10.1109/ACCESS.2020.3009653 (2020).
45. Fang, B., Li, Y., Zhang, H. & Chan, J. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sens. 11(2), 159. https://doi.org/10.3390/rs11020159 (2019).
46. Yao, H., Zhang, X., Zhou, X. & Liu, S. Parallel structure deep neural network using CNN and RNN with an attention mechanism for breast cancer histology image classification. Cancers (Basel) 11(12), 1901. https://doi.org/10.3390/cancers11121901 (2019).
47. Lei, B. et al. Self-co-attention neural network for anatomy segmentation in whole breast ultrasound. Med. Image Anal. 64, 101753. https://doi.org/10.1016/j.media.2020.101753 (2020).
48. Chen, J., Liu, Z., Wang, H., Nunez, A. & Han, Z. Automatic defect detection of fasteners on the catenary support device using deep convolutional neural network. IEEE Trans. Instrum. Meas. 67(2), 257–269. https://doi.org/10.1109/TIM.2017.2775345 (2018).
49. Zhong, J., Liu, Z., Han, Z., Han, Y. & Zhang, W. A CNN-based defect inspection method for catenary split pins in high-speed railway. IEEE Trans. Instrum. Meas. 68(8), 2849–2860. https://doi.org/10.1109/TIM.2018.2871353 (2019).
50. Mahmood, A. et al. Deep learning for coral classification. In Handbook of Neural Computation 383–401 (Elsevier Inc., 2017).
51. Bengio, Y. Practical Recommendations for Gradient-Based Training of Deep Architectures 437–478 (Springer, 2012).
52. Torres, A. D., Yan, H., Aboutalebi, A. H., Das, A., Duan, L. & Rad, P. Patient facial emotion recognition and sentiment analysis using secure cloud with hardware acceleration. In Computational Intelligence for Multimedia Big Data on the Cloud with Engineering Applications 61–89 (Elsevier, 2018).
53. Dolz, J., Desrosiers, C. & Ben Ayed, I. 3D fully convolutional networks for subcortical segmentation in MRI: A large-scale study. Neuroimage 170, 456–470. https://doi.org/10.1016/j.neuroimage.2017.04.039 (2018).
54. Yin, W., Schütze, H., Xiang, B. & Zhou, B. ABCNN: Attention-based convolutional neural network for modeling sentence pairs. Trans. Assoc. Comput. Linguist. 4, 259–272. https://doi.org/10.1162/tacl_a_00097 (2016).
55. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. & Salakhutdinov, R. Dropout: A simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014).
56. Husain, F., Dellen, B. & Torras, C. Scene understanding using deep learning. In Handbook of Neural Computation 373–382 (Elsevier Inc., 2017).
57. Wahab, N., Khan, A. & Lee, Y. S. Two-phase deep convolutional neural network for reducing class skewness in histopathological images based breast cancer detection. Comput. Biol. Med. 85, 86–97. https://doi.org/10.1016/j.compbiomed.2017.04.012 (2017).
58. Ranjbarzadeh, R. & Saadi, S. B. Automated liver and tumor segmentation based on concave and convex points using fuzzy c-means and mean shift clustering. Meas. J. Int. Meas. Confed. https://doi.org/10.1016/j.measurement.2019.107086 (2020).
59. Karimi, N., Ranjbarzadeh Kondrood, R. & Alizadeh, T. An intelligent system for quality measurement of Golden Bleached raisins using two comparative machine learning algorithms. Meas. J. Int. Meas. Confed. 107, 68–76. https://doi.org/10.1016/j.measurement.2017.05.009 (2017).
60. Pourasad, Y., Ranjbarzadeh, R. & Mardani, A. A new algorithm for digital image encryption based on chaos theory. Entropy 23(3), 341. https://doi.org/10.3390/e23030341 (2021).

Acknowledgements
The author Malika Bendechache is supported, in part, by Science Foundation Ireland (SFI) under grants No. 13/RC/2094_P2 (Lero) and 13/RC/2106_P2 (ADAPT).

Author contributions
R.R. contributed to conceptualization, investigation, methodology, software, writing (original draft), reviewing and editing, and validation. A.B.K. contributed to software, formal analysis, investigation, and data curation. S.J.G. contributed to formal analysis and investigation. S.A. contributed to investigation and data curation. M.N. contributed to conceptualization and writing (original draft). M.B. contributed to supervision, reviewing and editing, and validation.

Competing interests
The authors declare no competing interests.

Additional information
Correspondence and requests for materials should be addressed to M.N.
Reprints and permissions information is available at www.nature.com/reprints.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2021