Continual-FM

Foundation Models as Class-Incremental Learners for Dermatological Image Classification [Arxiv Paper] [Cite]

Abstract

Class-Incremental Learning (CIL) aims to learn new classes over time without forgetting previously acquired knowledge. The emergence of foundation models (FM) pretrained on large datasets presents new opportunities for CIL by offering rich, transferable representations. However, their potential for enabling incremental learning in dermatology remains largely unexplored. In this paper, we systematically evaluate frozen FMs pretrained on large-scale skin lesion datasets for CIL in dermatological disease classification. We propose a simple yet effective approach where the backbone remains frozen, and a lightweight MLP is trained incrementally for each task. This setup achieves state-of-the-art performance without forgetting, outperforming regularization, replay, and architecture-based methods. To further explore the capabilities of frozen FMs, we examine zero-training scenarios using nearest mean classifiers with prototypes derived from their embeddings. Through extensive ablation studies, we demonstrate that this prototype-based variant can also achieve competitive results. Our findings highlight the strength of frozen FMs for continual learning in dermatology and support their broader adoption in real-world medical applications.
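As a rough illustration of the prototype-based ("zero-training") variant described above, class-mean prototypes can be accumulated task by task from the frozen FM embeddings, and test samples are assigned to the nearest prototype. The sketch below is illustrative only (plain NumPy, hypothetical names), not the repository's implementation:

import numpy as np

class NearestMeanClassifier:
    """Prototype classifier over frozen foundation-model embeddings."""

    def __init__(self):
        self.prototypes = {}  # class label -> mean embedding

    def add_task(self, embeddings, labels):
        # New classes arrive with each task; no gradient training is needed.
        for c in np.unique(labels):
            self.prototypes[c] = embeddings[labels == c].mean(axis=0)

    def predict(self, embeddings):
        classes = list(self.prototypes)
        protos = np.stack([self.prototypes[c] for c in classes])           # (C, D)
        dists = np.linalg.norm(embeddings[:, None, :] - protos[None], axis=-1)
        return np.asarray(classes)[dists.argmin(axis=1)]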


Data & Models

Datasets

Our experiments are conducted on three publicly available dermatology datasets. Each dataset is partitioned into tasks with mutually exclusive class labels.

| Dataset | Source | Description |
| --- | --- | --- |
| HAM10000 (HAM) | Download | Dermoscopic images of 7 pigmented lesion classes. |
| Dermofit (DMF) | Download | High-quality skin lesion images collected under standardised conditions with internal colour standards. |
| Derm7pt (D7P) | Download | Dermoscopic dataset designed to follow the 7-point skin lesion malignancy checklist. |

Foundation Models

All models are used as frozen feature extractors without further fine-tuning. Extracted embeddings are later used to train lightweight classifiers incrementally.

| Model | Source / Description |
| --- | --- |
| Derm | Google Derm Foundation Model, trained on over 400 skin conditions. |
| PanDerm | PanDerm, pretrained on millions of clinical and dermoscopic dermatology images. |
| CLIP | CLIP ViT-L/14, pretrained on large-scale image-text pairs. |
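As an illustration of how a frozen backbone yields embeddings, the snippet below sketches feature extraction with the CLIP ViT-L/14 backbone via Hugging Face transformers (the other backbones follow the same pattern). The checkpoint name and preprocessing are assumptions made for this example; extract.py is the script that actually produces the embeddings used in the experiments.

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-large-patch14"            # assumed ViT-L/14 checkpoint
model = CLIPModel.from_pretrained(ckpt).eval()    # frozen: never fine-tuned
processor = CLIPProcessor.from_pretrained(ckpt)

@torch.no_grad()
def embed(image_path: str) -> torch.Tensor:
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    return model.get_image_features(**inputs).squeeze(0)  # one embedding vector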

How To Run

Setup Instructions

1. Create a Virtual Environment

For macOS/Linux:

python3 -m venv venv
source venv/bin/activate

For Windows:

python -m venv venv
venv\Scripts\activate

2. Install Dependencies

After activating the virtual environment, run:

pip install -r requirements.txt

Feature Extraction (Optional)

You have two options for obtaining the image embeddings needed for evaluation:

✅ Option 1: Use Precomputed Embeddings (Recommended)

  • Download all precomputed embeddings (.csv files) for all datasets and models directly from this link: Download Embeddings

  • Once downloaded, place the files inside the outputs/ directory and skip to the Run Experiment section.

📂 Expected Directory Structure:

outputs/
├── derm_ham.csv
├── panderm_ham.csv
.
.
└── clip_d7p.csv
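To sanity-check a downloaded file, it can be loaded with pandas as sketched below; the exact column layout (which columns hold labels versus embedding dimensions) is not documented here, so inspect the header yourself.

import pandas as pd

df = pd.read_csv("outputs/derm_ham.csv")   # any of the downloaded files
print(df.shape)               # rows = images, columns = embedding dims + metadata
print(list(df.columns)[:5])   # check which columns hold labels vs. features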

Option 2: Extract Features Yourself

  • Get a Hugging Face token to be able to use the Derm and CLIP models.

  • Rename .env.example to .env and set your token as HF_TOKEN=<your-token>.

  • Download the three datasets from the links above and place each in its corresponding directory.
    Ensure the directory structure matches the following:

data/
├── ham/
│   └── HAM10000_images_part_1/
│   └── HAM10000_images_part_2/
│   .
│   └── HAM10000_metadata
├── dmf/
│   └── DMF/
│        └── images/
│        └── meta-dmf.csv
└── d7p/
    └── release_v0/
        └── images/
        └── meta/
            └── meta.csv

  • You can extract embeddings for any dataset and model combination using the extract.py script (a loop over all combinations is sketched below).

Script: extract.py

Parameters

| Argument | Type | Description |
| --- | --- | --- |
| --data_name | string | Dataset name (e.g. ham, d7p, dmf). |
| --model_name | string | Name of the model (e.g. derm, panderm, clip). |

python extract.py \
    --data_name <data name> \
    --model_name <model name>

This will automatically extract features and save the output as:

outputs/{model_name}_{data_name}.csv
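If you want embeddings for every model/dataset pair, a small loop like the one below can invoke extract.py repeatedly. This is an optional convenience sketch, not part of the repository, and it assumes the datasets are already placed under data/ and HF_TOKEN is set in .env.

import subprocess

for model in ["derm", "panderm", "clip"]:
    for data in ["ham", "dmf", "d7p"]:
        # Equivalent to running the command shown above once per combination
        subprocess.run(
            ["python", "extract.py", "--data_name", data, "--model_name", model],
            check=True,
        )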

Run Experiment

Script: run_experiment.py

Parameters

| Argument | Type | Description |
| --- | --- | --- |
| --data_name | string | Dataset name (e.g. ham, d7p, dmf). |
| --model_name | string | Name of the model (e.g. derm, panderm, clip). |
Usage

From the terminal, run the script with:

python run_experiment.py \
    --data_name <data name> \
    --model_name <model name>

By default, this reads outputs/{model_name}_{data_name}.csv.

Example Usage

Run the experiment with the Derm foundation model on the DMF dataset (reads outputs/derm_dmf.csv):

python run_experiment.py \
    --data_name dmf \
    --model_name derm
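For intuition, the core approach evaluated here can be sketched as follows: the backbone stays frozen, and a lightweight MLP head is trained on the embeddings of each incremental task. The hidden size, training settings, and data-splitting interface below are assumptions for illustration (the sketch also omits how predictions from the per-task heads are combined at test time); run_experiment.py is the authoritative implementation.

from sklearn.neural_network import MLPClassifier

def train_task_heads(task_splits):
    """task_splits: list of (X_train, y_train) arrays with disjoint class labels."""
    heads = []
    for X, y in task_splits:
        head = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500)
        head.fit(X, y)   # only the small head is trained; embeddings stay frozen
        heads.append(head)
    return heads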

Citation

@inproceedings{elkhayat2025foundation,
  title={Foundation Models as Class-Incremental Learners for Dermatological Image Classification},
  author={Mohamed Elkhayat and Mohamed Mahmoud and Jamil Fayyad and Nourhan Bayasi},
  booktitle={MICCAI Student Board EMERGE Workshop: Empowering Medical Information Computing and Research through Early-career Guidance and Expertise},
  year={2025},
  url={https://openreview.net/forum?id=FyvpNwaMHk}
}
