Multimodal Deep Learning for Lung Cancer Analysis Using Data Visualisation
Abstract
Lung cancer remains one of the leading causes of cancer death worldwide. Early and accurate risk prediction can help clinicians make better decisions and improve patient outcomes. In this work, we develop a deep-learning framework that combines clinical records, imaging features, and survey data to predict lung cancer and its prognosis. For imaging, we use pretrained convolutional neural networks to extract features from CT and X-ray images; for clinical history, we use recurrent models; and for structured survey data, we apply gradient-boosted ensemble models. We fuse these features in a fully connected layer and fine-tune the model end to end. We evaluate the model on several open datasets, including Kaggle lung CT collections, the IQ-OTH/NCCD dataset, and a diagnostic survey dataset, reporting accuracy, precision, recall, F1, and ROC-AUC. To ensure a fair evaluation, we use stratified cross-validation, tune hyperparameters, and run ablation studies to measure how much each data type contributes. The combined model consistently outperforms both image-only and tabular-only baselines: it improves ROC-AUC and F1 scores and reduces false negatives, which is especially important for diagnosis. We also provide interactive visualisations in Looker Studio that show feature importance, risk distributions, and confusion matrices, helping clinicians understand the model. We use statistical tests to confirm that the improvements are significant and discuss ways to make predictions more interpretable for clinical use. Our results demonstrate that combining different data types improves accuracy and provides insights that support informed clinical decisions. We also discuss limitations, such as differences between datasets and label noise, and suggest further experiments and external validation as steps toward clinical use.
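As a rough illustration of the fusion architecture described above, the following sketch (not the authors' code) combines a pretrained CNN image branch, a recurrent encoder for clinical-history sequences, and a small tabular branch, then concatenates the three feature streams into a fully connected classification head. All module names, layer sizes, and input dimensions are illustrative assumptions; only the overall late-fusion structure follows the abstract.

# Minimal late-fusion sketch, assuming PyTorch/torchvision and illustrative dimensions.
import torch
import torch.nn as nn
from torchvision import models

class MultimodalLungModel(nn.Module):
    def __init__(self, tabular_dim=16, history_dim=32, hidden=128, n_classes=2):
        super().__init__()
        # Image branch: pretrained CNN backbone with its classifier removed
        # (downloads ImageNet weights on first use).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.image_branch = nn.Sequential(*list(backbone.children())[:-1])  # -> (B, 512, 1, 1)
        # Clinical-history branch: a GRU over a sequence of visit vectors.
        self.history_branch = nn.GRU(input_size=history_dim, hidden_size=hidden, batch_first=True)
        # Tabular branch: structured survey features through a small MLP.
        self.tabular_branch = nn.Sequential(nn.Linear(tabular_dim, hidden), nn.ReLU())
        # Fusion head: concatenated features -> fully connected classifier.
        self.fusion_head = nn.Sequential(
            nn.Linear(512 + hidden + hidden, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, image, history_seq, tabular):
        img_feat = self.image_branch(image).flatten(1)   # (B, 512)
        _, h_n = self.history_branch(history_seq)         # h_n: (1, B, hidden)
        hist_feat = h_n.squeeze(0)                        # (B, hidden)
        tab_feat = self.tabular_branch(tabular)           # (B, hidden)
        fused = torch.cat([img_feat, hist_feat, tab_feat], dim=1)
        return self.fusion_head(fused)

# Example forward pass with random tensors of the assumed shapes.
model = MultimodalLungModel()
logits = model(torch.randn(4, 3, 224, 224), torch.randn(4, 10, 32), torch.randn(4, 16))
print(logits.shape)  # torch.Size([4, 2])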
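The evaluation protocol (stratified cross-validation with accuracy, precision, recall, F1, and ROC-AUC) can be sketched in the same spirit. The synthetic data and scikit-learn's GradientBoostingClassifier stand in for the real survey features and the gradient-boosted tabular branch; the fold count and everything apart from the metric set are assumptions.

# Minimal sketch of stratified cross-validated evaluation, assuming scikit-learn.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

# Synthetic, imbalanced stand-in for the tabular survey data.
X, y = make_classification(n_samples=500, n_features=16, weights=[0.7, 0.3], random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = {m: [] for m in ["accuracy", "precision", "recall", "f1", "roc_auc"]}

for train_idx, test_idx in cv.split(X, y):
    clf = GradientBoostingClassifier(random_state=0).fit(X[train_idx], y[train_idx])
    pred = clf.predict(X[test_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    scores["accuracy"].append(accuracy_score(y[test_idx], pred))
    scores["precision"].append(precision_score(y[test_idx], pred))
    scores["recall"].append(recall_score(y[test_idx], pred))  # sensitivity: tracks false negatives
    scores["f1"].append(f1_score(y[test_idx], pred))
    scores["roc_auc"].append(roc_auc_score(y[test_idx], prob))

# Per-metric mean and standard deviation across folds.
for metric, vals in scores.items():
    print(f"{metric}: {np.mean(vals):.3f} +/- {np.std(vals):.3f}")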
Article Details
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.