We present a semi-supervised domain adaptation framework for brain vessel segmentation across image modalities. Existing state-of-the-art methods focus on a single modality, despite the wide range of available cerebrovascular imaging techniques; the resulting distribution shifts negatively impact generalization across modalities. Relying on annotated angiographies and a limited number of annotated venographies, our framework performs image-to-image translation and semantic segmentation, leveraging a disentangled and semantically rich latent space to represent heterogeneous data and carry out image-level adaptation from the source to the target domain. Moreover, we reduce the typical complexity of cycle-based architectures and minimize the use of adversarial training, yielding an efficient and intuitive model with stable training. We evaluate our method on magnetic resonance angiographies and venographies. While achieving state-of-the-art performance in the source domain, our method attains a Dice similarity coefficient in the target domain that is only 8.9% lower, highlighting its promising potential for robust cerebrovascular image segmentation across modalities.
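The Dice similarity coefficient reported above measures the overlap between a predicted and a reference binary segmentation mask. A minimal sketch of the standard formulation is given below; the function name and the `eps` smoothing term are illustrative choices, not taken from the paper:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|),
    with a small eps to avoid division by zero on empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection) / (pred.sum() + target.sum() + eps)
```

Identical masks yield a score of 1, and disjoint masks a score of 0; an 8.9% relative drop in the target domain is thus measured on this 0-to-1 overlap scale.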
A2V: A semi-supervised domain adaptation framework for brain vessel segmentation via two-phase training angiography-to-venography
BMVC 2023, 34th British Machine Vision Conference, 20-24 November 2023, Aberdeen, UK
© EURECOM. Personal use of this material is permitted. The definitive version of this paper was published in BMVC 2023, 34th British Machine Vision Conference, 20-24 November 2023, Aberdeen, UK and is available at:
PERMALINK: https://www.eurecom.fr/publication/7409