XRFV2


XRF V2: A Dataset for Action Summarization with Wi-Fi Signals, and IMUs in Phones, Watches, Earbuds, and Glasses

XRF V2 is a dataset designed for action summarization using Wi-Fi signals and IMU data from devices such as phones, watches, earbuds, and glasses. It supports research on human activity recognition and summarization with multi-modal sensor data.

📊 Dataset Link:

(Currently, only data from 3 volunteers are available for access)

After downloading the dataset, note that the xfv2_train folder contains the pre-segmented data used during training; it is split into training and testing sets as specified by the dataset_path parameter in basic_config.json. The xrfv2_raw folder contains the raw, unsegmented data intended for testing and evaluation; it corresponds to the dataset_root_path parameter in basic_config.json.
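For orientation, here is a minimal sketch of reading those two paths from basic_config.json. The file name and the dataset_path / dataset_root_path keys come from the description above; the flat key layout is an assumption, so adjust it to the actual structure of the config file.

import json

# Load the experiment configuration (assumed to sit at the repository root).
with open("basic_config.json", "r") as f:
    config = json.load(f)

# dataset_path       -> segmented data used for training/testing (xfv2_train)
# dataset_root_path  -> raw, unsegmented data for evaluation (xrfv2_raw)
print("segmented data:", config["dataset_path"])
print("raw data:", config["dataset_root_path"])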

📦 Environment Configuration

🛠️ Mamba Environment Setup:

Ensure that you are using the CUDA 11.8 environment.

# Clone the video-mamba-suite repository and enter it
git clone --recursive https://github.com/OpenGVLab/video-mamba-suite.git
cd video-mamba-suite

# Create and activate the environment
conda create -n video-mamba-suite python=3.9
conda activate video-mamba-suite

# Install PyTorch
pip install torch==2.1.2 torchvision==0.16.2 torchaudio==2.1.2 --index-url https://download.pytorch.org/whl/cu118

# Install required dependencies
pip install h5py pandas scipy torchinfo

# Install the requirements from requirement.txt
pip install -r requirement.txt

# Install causal-conv1d
cd causal-conv1d
python setup.py install
# If setup.py fails, force a source build instead:
CAUSAL_CONV1D_FORCE_BUILD=TRUE pip install .
cd ..

# Install mamba
cd mamba
python setup.py develop
cd ..

⚠️ If you encounter issues while installing causal-conv1d, please refer to this setup issue fix.
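Before moving on, it can help to verify that the installed PyTorch build really targets CUDA 11.8. A quick check, assuming the video-mamba-suite environment above is active:

import torch

# A wheel installed from the cu118 index should report CUDA 11.8 here.
print("torch version:", torch.__version__)          # expected: 2.1.2+cu118
print("cuda build:", torch.version.cuda)            # expected: 11.8
print("cuda available:", torch.cuda.is_available())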

🏃‍♂️ Running the Code:

  1. Modify the paths in basic_config.json to match your system setup.
  2. Train the model:
   python script/train_run.py
  3. Test the model. Copy the path of the trained model into test_model_list in test_run.py, then run the test script (see the sketch below):
   test_model_list = [XXXXX]
   python script/test_run.py
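For example, test_model_list in test_run.py might look like the sketch below; the checkpoint path is a hypothetical placeholder, not a file shipped with the repository:

# In script/test_run.py: list the trained checkpoint(s) to evaluate.
test_model_list = [
    "result/xrfv2_run1/best_model.pth",  # hypothetical path -- use your own trained model
]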

📞 Support

If you encounter any issues or need assistance, feel free to reach out to us.

📝 TODO

  • Process the videos into 2D pose, 3D pose, and mesh outputs for pose estimation and tracking, and for mesh reconstruction and tracking.
  • Process the videos into internvideo6b features for multimodal learning.

License 📜

XRFV2 is licensed under the MIT License. See the LICENSE file for more details.
