linyongver/SpuriousFeatureDiversification

The official code for the ICLR 2024 paper "Spurious Feature Diversification Improves OOD Generalization"


Introduction

In our ICLR 2024 paper Spurious Feature Diversification Improves Out-of-distribution Generalization, we find that learning diverse spurious features actually improves OOD generalization, a finding that can be effectively applied to modern (large) DNNs. We also have a follow-up work on large language models: Mitigating the Alignment Tax of RLHF.

Run the code

Run the following command to reproduce the MultiColorMNIST results in Table 1 (p = 0.7):

python submit_main.py --seed 1 --id_sp 1.0 --p 0.7 --n_restarts 20 --colors 32

The argument --id_sp sets the spurious correlation in the training domain, and --p sets the strength of the distributional shift (the spurious correlation in the testing domain is 1 - p).
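For intuition about these two knobs, here is a minimal, hypothetical sketch (not the repository's actual data pipeline; names like make_spurious_colors are invented for illustration) of how a spurious "color" feature could be attached to labels with correlation --id_sp at training time and 1 - p at test time:

import numpy as np

def make_spurious_colors(labels, corr, n_colors=32, seed=0):
    # Each example gets a "color" that matches its label with
    # probability `corr` (the spurious correlation) and is drawn
    # uniformly at random otherwise. Illustrative only.
    rng = np.random.default_rng(seed)
    agree = rng.random(len(labels)) < corr
    random_colors = rng.integers(0, n_colors, size=len(labels))
    return np.where(agree, labels % n_colors, random_colors)

labels = np.arange(1000) % 10                             # toy 10-class labels
train_colors = make_spurious_colors(labels, corr=1.0)     # --id_sp 1.0
test_colors = make_spurious_colors(labels, corr=1 - 0.7)  # 1 - p with --p 0.7

Under this reading, with --id_sp 1.0 the color perfectly predicts the label during training, while with --p 0.7 it agrees with the label only 30% of the time at test time, so a model relying on the spurious color feature alone degrades sharply under the shift.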
