This is the official PyTorch implementation for the CVPR 2025 paper: NLPrompt: Noise-Label Prompt Learning for Vision-Language Models.
Make sure conda is installed properly.
```bash
# Clone this repo
git clone https://github.com/qunovo/NLPrompt.git

# Create a conda environment
conda create -y -n nlprompt python=3.8

# Activate the environment
conda activate nlprompt

# Install torch and torchvision
# Please refer to https://pytorch.org/ if you need a different CUDA version
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.1 -c pytorch -c nvidia

# Install dependencies
cd Dassl.pytorch/
pip install -r requirements.txt

# Install this library (no need to re-build if the source code is modified)
python setup.py develop
cd ..
```
Follow the instructions in DATASETS.md to prepare the datasets.
Note that the Food101N dataset needs to be downloaded separately. Food101N uses the same test set as Food101.
This code is built on top of CoOp and Dassl. We sincerely appreciate their contributions!
We provide the running scripts in `scripts/nlprompt`. Make sure the data paths are set correctly before running them.
Below are examples of how to run NLPrompt on the Caltech101 dataset.

NLPrompt (Caltech101, symmetric noise):

```bash
bash scripts/nlprompt/main.sh caltech101 16 0.50 'sym' 100
```

NLPrompt (Caltech101, asymmetric noise):

```bash
bash scripts/nlprompt/main.sh caltech101 16 0.50 'asym' 100
```
The arguments are, in order: the dataset name, the number of shots, the noise rate, the noise type (`'sym'` or `'asym'`), and the number of classes in the dataset.
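To clarify what the two noise types mean, here is a minimal, self-contained sketch of the standard label-corruption models they refer to. This is an illustration only, not the repository's actual noise-injection code: under symmetric noise a corrupted label is replaced uniformly by any other class, while under asymmetric noise it is flipped to a fixed, class-dependent target (here, the next class index, a common convention).

```python
import random

def add_label_noise(labels, num_classes, noise_rate, noise_type="sym", seed=0):
    """Corrupt roughly a fraction `noise_rate` of the labels.

    'sym'  : replace a corrupted label uniformly with any *other* class.
    'asym' : flip a corrupted label to a fixed target class
             (here c -> (c + 1) mod num_classes, a common convention;
             the exact mapping used by NLPrompt may differ).
    """
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            if noise_type == "sym":
                # Choose uniformly among the other num_classes - 1 classes
                candidates = [c for c in range(num_classes) if c != y]
                noisy.append(rng.choice(candidates))
            else:
                # Class-dependent flip to a fixed target class
                noisy.append((y + 1) % num_classes)
        else:
            noisy.append(y)
    return noisy
```

For example, `add_label_noise([0, 1, 2, 3], num_classes=4, noise_rate=1.0, noise_type="asym")` flips every label to the next class, whereas with `noise_type="sym"` each corrupted label lands on a uniformly random different class.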
After the experiments finish, all results are saved to `output/`.
If you find our work useful in your research, please consider citing it!