diff --git a/README.md b/README.md
index 1fc79b6..69db6ce 100755
--- a/README.md
+++ b/README.md
@@ -11,6 +11,7 @@ It supports seven models, four features (video and audio features), and six data
 Furthermore, Lighthouse supports [audio moment retrieval](https://h-munakata.github.io/Language-based-Audio-Moment-Retrieval/), a task to identify relevant moments from an audio input based on a given text query.
 
 ## News
+- [2024/10/22] Version 1.0 has been released.
 - [2024/10/6] Our paper has been accepted at EMNLP2024, system demonstration track.
 - [2024/09/25] Our work ["Language-based audio moment retrieval"](https://arxiv.org/abs/2409.15672) has been released. Lighthouse supports AMR.
 - [2024/08/22] Our demo paper is available on arXiv. Any comments are welcome: [Lighthouse: A User-Friendly Library for Reproducible Video Moment Retrieval and Highlight Detection](https://www.arxiv.org/abs/2408.02901).
@@ -69,6 +70,7 @@ Run `python api_example/demo.py` to reproduce the results. It automatically down
 If you want to use other models, download [pre-trained weights](https://drive.google.com/file/d/1jxs_bvwttXTF9Lk3aKLohkqfYOonLyrO/view?usp=sharing).
 When using `clip_slowfast` features, it is necessary to download [slowfast pre-trained weights](https://dl.fbaipublicfiles.com/pyslowfast/model_zoo/kinetics400/SLOWFAST_8x8_R50.pkl).
 When using `clip_slowfast_pann` features, in addition to the slowfast weight, download [panns weights](https://zenodo.org/record/3987831/files/Cnn14_mAP%3D0.431.pth).
+Run `python api_example/amr_demo.py` to reproduce the AMR results.
 
 **Limitation**: The maximum video duration is **150s** due to the current benchmark datasets. For CPU users, set `feature_name='clip'` because CLIP+Slowfast or CLIP+Slowfast+PANNs features are very slow without GPUs.
 
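
For reference, a minimal sketch of the CPU-friendly `feature_name='clip'` setup mentioned in the second hunk, assuming the predictor interface used by `api_example/demo.py` (the class name `CGDETRPredictor`, the checkpoint and video paths, and the `encode_video`/`predict` method names are assumptions drawn from that script, not a verified API):

```python
import torch
from lighthouse.models import CGDETRPredictor  # assumed predictor class from api_example/demo.py

# Run on GPU if available; 'clip' features remain usable on CPU.
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Checkpoint path is an assumption; point it at the pre-trained weights downloaded above.
# With feature_name='clip', no slowfast/panns weights are needed.
model = CGDETRPredictor('results/clip_cg_detr/qvhighlight/best.ckpt',
                        device=device, feature_name='clip')

# Encode a video (max duration 150s due to the benchmark datasets), then query it.
model.encode_video('api_example/RoripwjYFp8_60.0_210.0.mp4')  # hypothetical sample video path
prediction = model.predict('A woman is speaking in front of the camera')
print(prediction)  # predicted moments and highlight scores
```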