Foundation Models as Class-Incremental Learners for Dermatological Image Classification
Class-Incremental Learning (CIL) aims to learn new classes over time without forgetting previously acquired knowledge. The emergence of foundation models (FMs) pretrained on large datasets presents new opportunities for CIL by offering rich, transferable representations. However, their potential for enabling incremental learning in dermatology remains largely unexplored. In this paper, we systematically evaluate frozen FMs pretrained on large-scale skin lesion datasets for CIL in dermatological disease classification. We propose a simple yet effective approach where the backbone remains frozen and a lightweight MLP is trained incrementally for each task. This setup achieves state-of-the-art performance without forgetting, outperforming regularization, replay, and architecture-based methods. To further explore the capabilities of frozen FMs, we examine zero-training scenarios using nearest mean classifiers with prototypes derived from their embeddings. Through extensive ablation studies, we demonstrate that this prototype-based variant can also achieve competitive results. Our findings highlight the strength of frozen FMs for continual learning in dermatology and support their broader adoption in real-world medical applications.
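As a minimal, hedged sketch of the zero-training variant described above: class prototypes are the per-class means of frozen-FM embeddings, and a nearest mean classifier (NMC) assigns each test embedding to the closest prototype. The function and variable names here are illustrative, not the repository's actual API.

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """One prototype per class: the mean of that class's frozen-FM embeddings."""
    return {c: embeddings[labels == c].mean(axis=0) for c in np.unique(labels)}

def nmc_predict(embeddings, prototypes):
    """Nearest mean classifier: assign each embedding to the closest prototype."""
    classes = sorted(prototypes)
    protos = np.stack([prototypes[c] for c in classes])                      # (C, D)
    dists = np.linalg.norm(embeddings[:, None, :] - protos[None], axis=-1)   # (N, C)
    return np.asarray(classes)[dists.argmin(axis=1)]
```

In a CIL setting this is naturally forgetting-free: each new task only adds prototypes for its own classes, and existing prototypes are never modified.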
Our experiments are conducted on three publicly available dermatology datasets. Each dataset is partitioned into tasks with mutually exclusive class labels.
| Dataset | Source | Description |
|---|---|---|
| HAM10000 (HAM) | Download | Dermoscopic images of 7 pigmented lesion classes. |
| Dermofit (DMF) | Download | High-quality skin lesion images collected under standardised conditions with internal colour standards. |
| Derm7pt (D7P) | Download | Dermoscopic dataset designed to follow the 7-point skin lesion malignancy checklist. |
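To make the task construction concrete, the snippet below shows one way to partition a dataset's classes into tasks with mutually exclusive labels. The two-classes-per-task layout and the random ordering are illustrative assumptions, not necessarily the paper's exact protocol.

```python
import numpy as np

def split_into_tasks(class_names, classes_per_task, seed=0):
    """Partition classes into disjoint CIL tasks (mutually exclusive labels)."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(class_names)
    return [list(shuffled[i:i + classes_per_task])
            for i in range(0, len(shuffled), classes_per_task)]

# The 7 HAM10000 lesion classes, split into tasks of 2 classes each:
ham_classes = ["akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"]
print(split_into_tasks(ham_classes, classes_per_task=2))
```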
All models are used as frozen feature extractors without further fine-tuning. Extracted embeddings are later used to train lightweight classifiers incrementally.
| Model | Source / Description |
|---|---|
| Derm | Google Derm Foundation Model, trained on over 400 skin conditions. |
| PanDerm | PanDerm, pretrained on millions of clinical and dermoscopic dermatology images. |
| CLIP | CLIP ViT-L/14, pretrained on large-scale image-text pairs. |
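The sketch below illustrates the paper's main setup under stated assumptions: embeddings come from a frozen backbone, and a separate lightweight MLP head is trained per task, so earlier heads are never revisited. Layer sizes, epochs, and the optimizer are illustrative choices, not the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

def make_head(embed_dim, num_classes, hidden=256):
    """Lightweight MLP classifier head over frozen-FM embeddings."""
    return nn.Sequential(nn.Linear(embed_dim, hidden), nn.ReLU(),
                         nn.Linear(hidden, num_classes))

def train_task_head(embeds, labels, num_classes, epochs=20, lr=1e-3):
    """Train one head on one task's embeddings; the backbone is never updated."""
    head = make_head(embeds.shape[1], num_classes)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):  # full-batch training, for brevity
        opt.zero_grad()
        loss_fn(head(embeds.float()), labels.long()).backward()
        opt.step()
    return head

# One head per task: training task t leaves the heads of tasks < t untouched,
# so nothing learned earlier can be forgotten.
```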
For macOS/Linux:

```bash
python3 -m venv venv
source venv/bin/activate
```

For Windows:

```bash
python -m venv venv
venv\Scripts\activate
```

After activating the virtual environment, run:

```bash
pip install -r requirements.txt
```
You have two options for obtaining the image embeddings needed for evaluation:
- Download all precomputed embeddings (`.csv` files) for all datasets and models directly from this link: Download Embeddings
- Once downloaded, place the files inside the `outputs/` directory and skip to the Run Experiment section:

```
outputs/
├── derm_ham.csv
├── panderm_ham.csv
.
.
└── clip_d7p.csv
```
- Get a Hugging Face token to be able to use the Derm and CLIP models.
- Rename `.env.example` to `.env` and set your token as `HF_TOKEN=<your-token>` (see the token-loading sketch after the directory tree below).
- Download the three datasets from the links above and place each in its corresponding directory.
Ensure the directory structure matches the following:
```
data/
├── ham/
│   ├── HAM10000_images_part_1/
│   ├── HAM10000_images_part_2/
│   .
│   └── HAM10000_metadata
├── dmf/
│   └── DMF/
│       ├── images/
│       └── meta-dmf.csv
└── d7p/
    └── release_v0/
        ├── images/
        └── meta/
            └── meta.csv
```
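For reference, one minimal way a script could read the token from `.env` is via `python-dotenv`; the repository's actual loading code may differ, so treat this purely as a sketch:

```python
import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()                    # reads HF_TOKEN from the local .env file
hf_token = os.getenv("HF_TOKEN")
assert hf_token, "HF_TOKEN not set; copy .env.example to .env and fill it in"
```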
You can extract embeddings for any dataset and model combination using the `extract.py` script.
| Argument | Type | Description |
|---|---|---|
| `--data_name` | string | Dataset name (e.g. `ham`, `d7p`, `dmf`). |
| `--model_name` | string | Name of the model (e.g. `derm`, `panderm`, `clip`). |
```bash
python extract.py \
    --data_name <data name> \
    --model_name <model name>
```
This will automatically extract features and save the output as `outputs/{model_name}_{data_name}.csv`.
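To sanity-check an embeddings file before running experiments, you can open it with pandas. The exact column layout (feature dimensions plus label/metadata columns) is an assumption here; adjust to whatever the generated CSVs actually contain:

```python
import pandas as pd

df = pd.read_csv("outputs/derm_dmf.csv")  # follows the {model_name}_{data_name}.csv naming
print(df.shape)                            # rows = images, columns = embedding dims plus any metadata
print(list(df.columns[:5]), "...")
```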
| Argument | Type | Description |
|---|---|---|
| `--data_name` | string | Dataset name (e.g. `ham`, `d7p`, `dmf`). |
| `--model_name` | string | Name of the model (e.g. `derm`, `panderm`, `clip`). |
From the terminal, run the script with:

```bash
python run_experiment.py \
    --data_name <data name> \
    --model_name <model name>
```
By default, this reads `outputs/{model_name}_{data_name}.csv`.
Example: run an experiment with the `derm-foundation` model on the `dmf` dataset, which reads `outputs/derm_dmf.csv`:

```bash
python run_experiment.py \
    --data_name dmf \
    --model_name derm
```
```bibtex
@inproceedings{elkhayat2025foundation,
  title={Foundation Models as Class-Incremental Learners for Dermatological Image Classification},
  author={Mohamed Elkhayat and Mohamed Mahmoud and Jamil Fayyad and Nourhan Bayasi},
  booktitle={MICCAI Student Board EMERGE Workshop: Empowering Medical Information Computing and Research through Early-career Guidance and Expertise},
  year={2025},
  url={https://openreview.net/forum?id=FyvpNwaMHk}
}
```