Lite-FBCN: Lightweight Fast Bilinear Convolutional Network for Brain Disease Classification from MRI Images
Abstract
Achieving high accuracy with computational efficiency in brain disease classification from Magnetic Resonance Imaging (MRI) scans is challenging, particularly when both coarse and fine-grained distinctions are crucial. Current deep learning methods often struggle to balance accuracy with computational demands. We propose Lite-FBCN, a novel Lightweight Fast Bilinear Convolutional Network designed to address this issue. Unlike traditional dual-network bilinear models, Lite-FBCN utilizes a single-network architecture, significantly reducing computational load. Lite-FBCN leverages lightweight, pre-trained CNNs fine-tuned to extract relevant features and incorporates a channel reducer layer before bilinear pooling, minimizing feature map dimensionality and producing a compact bilinear vector. Extensive evaluations on cross-validation and hold-out data demonstrate that Lite-FBCN not only surpasses baseline CNNs but also outperforms existing bilinear models. Lite-FBCN with MobileNetV1 attains 98.10% accuracy in cross-validation and 69.37% on hold-out data (a 3% improvement over the baseline). UMAP visualizations further confirm its effectiveness in distinguishing closely related brain disease classes. Moreover, its optimal trade-off between performance and computational efficiency positions Lite-FBCN as a promising solution for enhancing diagnostic capabilities in resource-constrained and/or real-time clinical environments.
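The single-network design described in the abstract can be sketched as follows: one backbone extracts a feature map, a 1x1 convolutional channel reducer shrinks its depth, and bilinear pooling takes the outer product of the reduced features with themselves, yielding a compact bilinear vector. This is an illustrative sketch only; the toy backbone, channel counts, and class count below are assumptions, not the authors' exact configuration (the paper uses pre-trained lightweight CNNs such as MobileNetV1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LiteFBCN(nn.Module):
    """Sketch of a single-network fast bilinear CNN:
    backbone -> 1x1 channel reducer -> self bilinear pooling -> classifier."""

    def __init__(self, backbone, in_channels, reduced_channels=64, num_classes=4):
        super().__init__()
        self.backbone = backbone  # any CNN feature extractor
        # 1x1 conv shrinks channel depth so the bilinear vector stays small
        self.reducer = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
        self.fc = nn.Linear(reduced_channels ** 2, num_classes)

    def forward(self, x):
        f = self.reducer(self.backbone(x))            # (B, r, H, W)
        b, r, h, w = f.shape
        f = f.view(b, r, h * w)
        # outer product of the features with themselves, averaged
        # over spatial positions -> (B, r, r)
        bilinear = torch.bmm(f, f.transpose(1, 2)) / (h * w)
        v = bilinear.view(b, -1)                      # compact bilinear vector
        v = torch.sign(v) * torch.sqrt(torch.abs(v) + 1e-10)  # signed sqrt
        v = F.normalize(v)                            # L2 normalization
        return self.fc(v)

# toy backbone standing in for a pre-trained lightweight CNN
toy = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU())
model = LiteFBCN(toy, in_channels=32, reduced_channels=16, num_classes=4)
logits = model(torch.randn(2, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 4])
```

With `reduced_channels=64` the bilinear vector has only 64 x 64 = 4096 entries, versus over a million for an unreduced backbone with 1024 output channels, which is where the "lightweight" saving comes from.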
Copyright (c) 2024 EMITTER International Journal of Engineering Technology
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.