
kill_time_detection

  • ACM CHI 2023 paper "Are You Killing Time? Predicting Smartphone Users’ Time-killing Moments via Fusion of Smartphone Sensor Data and Screenshots"

This repository provides the code for the time-killing detection described in the paper. The trained model weights reported in the paper can be downloaded from the Releases. However, due to IRB policy, we cannot provide the training dataset; you will need to collect your own screenshots and sensor logs to train the models. We have provided a few example screenshots and sensor logs that we generated ourselves for inference :nerd_face:.

Requirements

We trained our models on 8 Tesla V100 GPUs.

  • PyTorch (Container Version 21.06 is recommended)

Our framework runs on Python 3 environments. The modules used in our code can be installed with:

$ pip install -r requirements.txt

How to use

  • First, store your own screenshots in the datasets/screenshots folder and the corresponding sensor data as a csv file in datasets/csv. See In Details for more information on the folder layout. We have provided a few example files so you can run through our code.

  • Second, run create_frame.py to generate folders that store each 30-second window of screenshots as one data point.

$ python create_frame.py
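Since create_frame.py itself is not shown here, the windowing idea can be sketched as follows. The epoch-second timestamps and the `group_into_windows` helper are illustrative assumptions, not the repository's actual implementation:

```python
from collections import defaultdict

def group_into_windows(timestamps, window_sec=30):
    """Group screenshot timestamps (epoch seconds) into fixed windows.

    Each window of screenshots becomes one data point, keyed by the
    window's start time.
    """
    windows = defaultdict(list)
    for ts in sorted(timestamps):
        start = int(ts // window_sec) * window_sec  # window this shot falls in
        windows[start].append(ts)
    return dict(windows)

shots = [0, 5, 12, 31, 44, 62]
print(group_into_windows(shots))  # three 30-second windows
```

In the real script the window folders would then be populated with the corresponding screenshot files.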
  • Third, run create_dataset.py to generate train, val, and test datasets.
$ python create_dataset.py
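A minimal sketch of what splitting the generated data points into train, val, and test sets might look like. The split fractions, seed, and the `split_dataset` helper are illustrative assumptions; the actual create_dataset.py builds per-fold sets:

```python
import numpy as np

def split_dataset(X, y, val_frac=0.1, test_frac=0.1, seed=0):
    """Shuffle data points, then split into train/val/test arrays."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_test = int(len(X) * test_frac)
    n_val = int(len(X) * val_frac)
    test, val = idx[:n_test], idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]
    return (X[train], y[train]), (X[val], y[val]), (X[test], y[test])

X = np.arange(20).reshape(10, 2)
y = np.arange(10)
train, val, test = split_dataset(X, y, val_frac=0.2, test_frac=0.2)
print(len(train[0]), len(val[0]), len(test[0]))  # 6 2 2
```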
  • Fourth, run H5.py to generate the H5 file, which speeds up the training process with large amounts of screenshot data.
# H5.py 
# Change the file paths below to match your data.
dataset_X_path = "./datasets/trainSet_X_fold1.npy"
dataset_Y_path = "./datasets/trainSet_Y_fold1.npy"
h5_path = "./datasets/trainSet_fold1.h5"
$ python H5.py
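The .npy-to-H5 packing step can be sketched with h5py as below. The dataset keys "X"/"Y" and the `npy_to_h5` helper are assumptions for illustration, since H5.py's actual layout is not shown:

```python
import os
import tempfile

import h5py
import numpy as np

def npy_to_h5(x_path, y_path, h5_path):
    """Pack the X/Y .npy arrays into one H5 file for faster training I/O."""
    X, Y = np.load(x_path), np.load(y_path)
    with h5py.File(h5_path, "w") as f:
        f.create_dataset("X", data=X, compression="gzip")
        f.create_dataset("Y", data=Y, compression="gzip")

# demo on throwaway arrays in a temp directory
tmp = tempfile.mkdtemp()
np.save(os.path.join(tmp, "X.npy"), np.zeros((4, 3)))
np.save(os.path.join(tmp, "Y.npy"), np.ones(4))
npy_to_h5(os.path.join(tmp, "X.npy"),
          os.path.join(tmp, "Y.npy"),
          os.path.join(tmp, "set.h5"))
with h5py.File(os.path.join(tmp, "set.h5")) as f:
    print(f["X"].shape, f["Y"].shape)
```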
  • Fifth, to train the sensor model, run mlp_varlen.py; to train the screenshot model, run cnn_varlen.py. In both files, you will need to modify the paths to the H5 file and the csv file.
# mlp_varlen.py

csv_path =  './datasets/csv/encoded_test_data.csv' # your sensor data path 
h5_train_path = './datasets/trainSet_fold1.h5' # your file path 
h5_val_path = './datasets/valSet_fold1.h5' # your file path 
user = ['U00'] # total users
# cnn_varlen.py

h5_train_path = './datasets/trainSet_fold1.h5' # your H5 file path 
h5_val_path = './datasets/valSet_fold1.h5' # your H5 file path 
$ python mlp_varlen.py

$ python cnn_varlen.py
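The "varlen" in the script names refers to variable-length windows: different 30-second windows can contain different numbers of screenshots, so batches are typically padded to a common length while the true lengths are tracked. A minimal numpy sketch of that idea (`pad_sequences` is a hypothetical helper, not the repository's code):

```python
import numpy as np

def pad_sequences(seqs, feat_dim):
    """Pad variable-length windows to the batch max length; return lengths."""
    lengths = np.array([len(s) for s in seqs])
    batch = np.zeros((len(seqs), lengths.max(), feat_dim))
    for i, s in enumerate(seqs):
        batch[i, :len(s)] = s  # positions past len(s) stay zero-padded
    return batch, lengths

seqs = [np.ones((2, 4)), np.ones((5, 4))]  # 2-shot and 5-shot windows
batch, lengths = pad_sequences(seqs, feat_dim=4)
print(batch.shape, lengths.tolist())  # (2, 5, 4) [2, 5]
```

The lengths let the model ignore the padded positions when encoding each window.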
  • Sixth, to train the fusion model, run fusion_varlen.py. In fusion_varlen.py, you will need to assign the paths of the dataset and of the weights you trained with mlp_varlen.py and cnn_varlen.py.
# fusion_varlen.py

h5_train_path = './datasets/trainSet_fold'+fold+'.h5' # your H5 file path 
h5_val_path = './datasets/valSet_fold'+fold+'.h5'  # your H5 file path 
cnn_encoder_path = 'cnn_encoder_CRNN1_epoch1.pth' # your screenshot model's encoder weight
cnn_decoder_path = 'rnn_decoder_CRNN1_epoch1.pth' # your screenshot model's decoder weight
mlp_encoder_path = 'rnn_decoder_mlp1_epoch1.pth' # your sensor model's encoder weight
$ python fusion_varlen.py
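Conceptually, the fusion model combines the sensor encoder's and screenshot encoder's embeddings before a classification head. A minimal late-fusion sketch in numpy, where the embedding sizes, weights, and the `fuse_and_score` helper are illustrative assumptions rather than the paper's architecture:

```python
import numpy as np

def fuse_and_score(sensor_emb, screen_emb, W, b):
    """Late fusion: concatenate the two encoders' embeddings, then apply
    a linear head and a sigmoid to score the window."""
    fused = np.concatenate([sensor_emb, screen_emb], axis=-1)
    logit = fused @ W + b
    return 1.0 / (1.0 + np.exp(-logit))  # probability of a time-killing moment

sensor = np.zeros(8)    # assumed sensor embedding size
screen = np.zeros(16)   # assumed screenshot embedding size
W, b = np.zeros(24), 0.0
print(fuse_and_score(sensor, screen, W, b))  # 0.5 for all-zero inputs
```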
  • Lastly, for inference, run predictionCNN.py, predictionFusion.py, and predictionMLP.py. Remember to change the weight paths and the test-set path.
# predictionMLP.py

checkpoint_path = './weight/rnn_decoder_mlp1_epoch1.pth' # your sensor model's weights
h5_test_path = './datasets/testSet_fold1.h5' # your H5 file path 
# predictionCNN.py

cnn_encoder_path = 'cnn_encoder_CRNN1_epoch1.pth' # your screenshot model's encoder weight
cnn_decoder_path = 'rnn_decoder_CRNN1_epoch1.pth' # your screenshot model's decoder weight
h5_test_path = './datasets/testSet_fold1.h5' # your H5 file path 
# predictionFusion.py

checkpoint_path = './weight/rnn_decoder_fusion1_epoch1.pth' # your fusion model's weights
h5_test_path = './datasets/testSet_fold1.h5' # your H5 file path 
$ python predictionMLP.py

$ python predictionCNN.py

$ python predictionFusion.py
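Each prediction script writes its output to prediction/result.csv. A minimal sketch of that final step; the column names and the `write_results` helper are assumptions, so check the scripts for the actual layout:

```python
import csv
import io

def write_results(rows, fh):
    """Write per-window predictions as csv: one row per 30-second window."""
    writer = csv.writer(fh)
    writer.writerow(["window_id", "prob_killing_time", "label"])
    for window_id, prob in rows:
        # threshold at 0.5 to turn the probability into a binary label
        writer.writerow([window_id, f"{prob:.3f}", int(prob >= 0.5)])

buf = io.StringIO()
write_results([("U00_0001", 0.81), ("U00_0002", 0.12)], buf)
print(buf.getvalue())
```

In the scripts, `fh` would be an open handle on prediction/result.csv rather than an in-memory buffer.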

In Details

├──  weight
│    └── rnn_decoder_cnn1.pth  - download the weight from Releases.
│    └── rnn_decoder_fusion1.pth - download the weight from Releases.
│    └── rnn_decoder_mlp1.pth - download the weight from Releases.
│    
│    
├── utils
│    ├── deepfm.py
│    └── functions.py
│    
│    
│ 
├──  datasets  - the folder responsible for all data handling.
│    ├── screenshots
│    │        └── UID 
│    │              ├── 1 - all the screenshots labeled as killing-time
│    │              └── 2 - all the screenshots labeled as non-killing-time
│    └── csv  
│        └── .csv
│
├── prediction
│    └── result.csv - the prediction result from the model.
│
├── H5.py
│
├── create_dataset.py
│
├── create_frame.py
│
├── cnn_varlen.py
│   
├── mlp_varlen.py
│
├── fusion_varlen.py          
│  
├── predictionCNN.py
│ 
├── predictionFusion.py
│ 
└── predictionMLP.py				

Contributing

Acknowledgments
