There are two modes of inference, as described in our paper: greedy search and beam search.
Greedy search selects the most likely question decomposition at each step rather than considering multiple decomposition strategies. This is much faster than beam search but cannot recover from failures, e.g. if the most likely decomposition at a given step asks the textqa agent a question that it cannot answer.
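As a rough illustration (not the actual code in commaqa/inference, and using hypothetical helpers such as `decomposer.top_k` and `agents[...].answer`), greedy search keeps only the single best next sub-question and gives up as soon as one fails:

```python
def greedy_decompose(question, decomposer, agents, max_steps=10):
    """Hypothetical sketch of greedy-search inference over question decompositions."""
    state = {"question": question, "answers": []}
    for _ in range(max_steps):
        # Keep only the single most likely next step (k=1); no alternatives are stored.
        step = decomposer.top_k(state, k=1)[0]
        if step.is_final:
            return step.answer
        answer = agents[step.agent].answer(step.sub_question)
        if answer is None:
            # e.g. the textqa agent cannot answer this sub-question; there is
            # nothing to back off to, so the whole decomposition fails.
            return None
        state["answers"].append((step.sub_question, answer))
    return None
```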
To run inference using greedy search, run:
```shell
model_path=[PATH TO DECOMPOSER MODEL] \
remodel_path=[PATH TO DATASET FOLDER IN COMMAQA FORMAT] \
filename=[FILENAME, e.g., train/dev/test.json] \
python commaqa/inference/configurable_inference.py \
    --input [FILE IN DROP FORMAT] \
    --config configs/inference/commaqav1_greedy_search.jsonnet \
    --reader drop \
    --output predictions.json
```
Since questions in our dataset (and in other tasks in general) don't always have a pre-determined decomposition strategy, we may need to consider multiple question decompositions at each step and then select the ones that succeed. We use beam search for this. To run inference in this mode, use the command below (a rough sketch of the beam-search loop follows it):
```shell
model_path=[PATH TO DECOMPOSER MODEL] \
remodel_path=[PATH TO DATASET FOLDER IN COMMAQA FORMAT] \
filename=[FILENAME, e.g., train/dev/test.json] \
python commaqa/inference/configurable_inference.py \
    --input [FILE IN DROP FORMAT] \
    --config configs/inference/commaqav1_beam_search.jsonnet \
    --reader drop \
    --output predictions.json
```
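For contrast with the greedy sketch above, here is a minimal beam-search sketch under the same hypothetical `decomposer`/`agents` interface (not the code behind `commaqav1_beam_search.jsonnet`): several partial decompositions are expanded per step, and candidates whose sub-questions fail are dropped rather than ending the search:

```python
def beam_decompose(question, decomposer, agents, beam_size=5, max_steps=10):
    """Hypothetical sketch of beam-search inference over question decompositions."""
    beam = [{"question": question, "answers": [], "score": 0.0}]
    completed = []  # (score, answer) pairs for decompositions that reached a final answer
    for _ in range(max_steps):
        candidates = []
        for state in beam:
            for step in decomposer.top_k(state, k=beam_size):
                if step.is_final:
                    completed.append((state["score"] + step.score, step.answer))
                    continue
                answer = agents[step.agent].answer(step.sub_question)
                if answer is None:
                    continue  # this candidate failed, but other beam entries survive
                candidates.append({
                    "question": question,
                    "answers": state["answers"] + [(step.sub_question, answer)],
                    "score": state["score"] + step.score,
                })
        if not candidates:
            break
        # Keep only the highest-scoring partial decompositions for the next step.
        beam = sorted(candidates, key=lambda s: s["score"], reverse=True)[:beam_size]
    return max(completed, key=lambda c: c[0])[1] if completed else None
```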
For example, to run inference on CommaQA-E using the provided datasets and models:
- Unzip the dataset `commaqa_explicit.zip` into `commaqa_explicit`
- Unzip the model `commaqa_e_oracle_model.zip` into `commaqa_explicit_oracle_model`
- Call inference:
```shell
model_path=commaqa_explicit_oracle_model/ \
remodel_path=commaqa_explicit/commaqa/ \
filename=test.json \
python commaqa/inference/configurable_inference.py \
    --input commaqa_explicit/drop/${filename} \
    --config configs/inference/commaqav1_beam_search.jsonnet \
    --reader drop \
    --output predictions.json
```
You can change the dataset and model paths to run inference on a different split. To run greedy inference instead, change the config file to `commaqav1_greedy_search.jsonnet`.
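To spot-check the output, the snippet below assumes `predictions.json` is a JSON mapping from question id to predicted answer (the usual DROP-style prediction format); verify the exact structure your run produces before relying on it:

```python
import json

# Assumption: predictions.json maps each question id to its predicted answer.
with open("predictions.json") as f:
    predictions = json.load(f)

# Print a few predictions to sanity-check the run.
for qid, answer in list(predictions.items())[:5]:
    print(qid, "->", answer)
```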