From cd5e83fb09ede13700f7c437b69f5e9b87cb341c Mon Sep 17 00:00:00 2001
From: Saurav Maheshkar
Date: Thu, 19 Sep 2024 15:59:38 +0100
Subject: [PATCH] fix: drop broken link

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 7b4827a..b57b3ed 100644
--- a/README.md
+++ b/README.md
@@ -40,7 +40,7 @@ Check out [Lightly**SSL**](https://github.com/lightly-ai/lightly) a computer vis
 | [Faster Segment Anything: Towards Lightweight SAM for Mobile Applications](https://arxiv.org/abs/2306.14289) | [![arXiv](https://img.shields.io/badge/arXiv-2306.14289-b31b1b.svg)](https://arxiv.org/abs/2306.14289) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/ChaoningZhang/MobileSAM) |
 | [What Do Self-Supervised Vision Transformers Learn?](https://arxiv.org/abs/2305.00729) | [![arXiv](https://img.shields.io/badge/ICLR_2023-2305.00729-b31b1b.svg)](https://arxiv.org/abs/2305.00729) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/naver-ai/cl-vs-mim) |
 | [Improved baselines for vision-language pre-training](https://arxiv.org/abs/2305.08675) | [![arXiv](https://img.shields.io/badge/arXiv-2305.08675-b31b1b.svg)](https://arxiv.org/abs/2305.08675) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/facebookresearch/clip-rocket) |
-| [Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need](https://arxiv.org/abs/2303.15256) | [![arXiv](https://img.shields.io/badge/arXiv-2303.15256-b31b1b.svg)](https://arxiv.org/abs/2303.15256) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/VivienCabannes/rates) |
+| [Active Self-Supervised Learning: A Few Low-Cost Relationships Are All You Need](https://arxiv.org/abs/2303.15256) | [![arXiv](https://img.shields.io/badge/arXiv-2303.15256-b31b1b.svg)](https://arxiv.org/abs/2303.15256) |
 | [EfficientSAM: Leveraged Masked Image Pretraining for Efficient Segment Anything](https://arxiv.org/abs/2312.00863) | [![arXiv](https://img.shields.io/badge/arXiv-2312.00863-b31b1b.svg)](https://arxiv.org/abs/2312.00863) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/yformer/EfficientSAM) |
 | [DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions](https://arxiv.org/abs/2309.03576) | [![arXiv](https://img.shields.io/badge/arXiv-2309.03576-b31b1b.svg)](https://arxiv.org/abs/2309.03576) [![GitHub](https://img.shields.io/badge/GitHub-100000?&logo=github&logoColor=white)](https://github.com/Haochen-Wang409/DropPos) |
 | [VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_VideoMAE_V2_Scaling_Video_Masked_Autoencoders_With_Dual_Masking_CVPR_2023_paper.pdf) | [![CVPR](https://img.shields.io/badge/CVPR-2023-b31b1b.svg)](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_VideoMAE_V2_Scaling_Video_Masked_Autoencoders_With_Dual_Masking_CVPR_2023_paper.pdf) |