Abstract
Medical imaging is a dynamic domain in which new acquisition protocols are regularly developed and deployed to meet changing clinical needs. Deep learning models for medical image segmentation have proven to be a valuable tool for medical image processing, but creating such a model from scratch requires considerable effort to annotate the new type of data and train the model; as a result, the amount of annotated training data for a new imaging protocol is often limited. In this work, we propose a framework for segmenting images acquired with a new imaging protocol (contrast-enhanced lung CT) that does not require annotated training data in the new target domain. Instead, the framework leverages previously developed models, data, and annotations from a related source domain. Using contrast-enhanced lung CT as the target data, we demonstrate that unpaired image translation from the non-contrast-enhanced source data, combined with self-supervised pretraining, achieves a Dice score of 0.726 on the COVID-19 lesion segmentation task in the target domain, without annotating any target data for model training.
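The abstract describes two technical ingredients: unpaired image translation from the annotated non-contrast source domain into the contrast-enhanced target domain, and self-supervised pretraining on unlabeled target scans, followed by supervised segmentation training on the translated, source-annotated data. The PyTorch sketch below illustrates one plausible shape of such a pipeline; the toy 2-D networks, the CycleGAN-style losses, the denoising pretext task, and all hyper-parameters are illustrative assumptions and do not reproduce the authors' implementation.

```python
# Illustrative sketch only: toy 2-D networks and random tensors stand in for
# 3-D CT volumes; losses and hyper-parameters are assumptions, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F


def tiny_fcn(in_ch: int, out_ch: int) -> nn.Sequential:
    """Small fully convolutional net used as a stand-in generator/segmenter."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )


class PatchDiscriminator(nn.Module):
    """PatchGAN-style critic: one real/fake score per spatial patch."""

    def __init__(self, in_ch: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


# Unpaired slices: annotated non-contrast source, unlabeled contrast-enhanced target.
source = torch.randn(2, 1, 128, 128)
target = torch.randn(2, 1, 128, 128)
source_mask = torch.randint(0, 2, (2, 128, 128))  # lesion masks exist only for source

# --- Stage 1: CycleGAN-style unpaired translation, source -> target ---------------
G_s2t, G_t2s = tiny_fcn(1, 1), tiny_fcn(1, 1)
D_t = PatchDiscriminator()
opt_g = torch.optim.Adam(list(G_s2t.parameters()) + list(G_t2s.parameters()), lr=2e-4)

fake_target = G_s2t(source)
d_fake = D_t(fake_target)
adv_loss = F.mse_loss(d_fake, torch.ones_like(d_fake))   # LSGAN generator loss
cycle_loss = F.l1_loss(G_t2s(fake_target), source)        # cycle consistency
g_loss = adv_loss + 10.0 * cycle_loss
opt_g.zero_grad(); g_loss.backward(); opt_g.step()         # discriminator update omitted

# --- Stage 2: self-supervised pretraining on unlabeled target scans ---------------
# A simple denoising/reconstruction pretext task; the actual pretext task used in
# the paper may differ (this choice is an assumption).
pretext_net = tiny_fcn(1, 1)
opt_ssl = torch.optim.Adam(pretext_net.parameters(), lr=1e-3)
corrupted = target + 0.3 * torch.randn_like(target)
ssl_loss = F.l1_loss(pretext_net(corrupted), target)
opt_ssl.zero_grad(); ssl_loss.backward(); opt_ssl.step()

# --- Stage 3: supervised segmentation on translated source data -------------------
# Source masks supervise a segmenter whose inputs now look like target-domain CT.
seg_net = tiny_fcn(1, 2)  # 2 classes: background / COVID-19 lesion
opt_seg = torch.optim.Adam(seg_net.parameters(), lr=1e-3)
logits = seg_net(G_s2t(source).detach())
seg_loss = F.cross_entropy(logits, source_mask)
opt_seg.zero_grad(); seg_loss.backward(); opt_seg.step()
```

In a full pipeline the translation stage would also update the target-domain discriminator, and the segmentation network would typically be initialized from the self-supervised weights before fine-tuning; both steps are omitted here for brevity.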
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Kvasnytsia, M., Berenguer, A.D., Sahli, H., Vandemeulebroucke, J. (2023). COVID-19 Lesion Segmentation Framework for the Contrast-Enhanced CT in the Absence of Contrast-Enhanced CT Annotations. In: Xue, Z., et al. (eds.) Medical Image Learning with Limited and Noisy Data. MILLanD 2023. Lecture Notes in Computer Science, vol. 14307. Springer, Cham. https://doi.org/10.1007/978-3-031-44917-8_7