diff --git a/examples/question-answering/README.md b/examples/question-answering/README.md
index 3445330c47..654a9e02ad 100755
--- a/examples/question-answering/README.md
+++ b/examples/question-answering/README.md
@@ -190,14 +190,6 @@ Here is a DeepSpeed configuration you can use to train your models on Gaudi:
 }
 ```
-
-### Training in torch.compile mode
-
-Albert XXL model training in [torch.compile](pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) mode is enabled by applying the following changes to your command, \
-a) Set the following environment variables `PT_HPU_LAZY_MODE=0` and `PT_ENABLE_INT64_SUPPORT=1`. \
-b) Run the above commands with `--model_name_or_path albert-xxlarge-v1`, `--use_lazy_mode False` and add `--torch_compile`, `--torch_compile_backend hpu_backend` and remove `--use_hpu_graphs_for_inference` flags.
-
-
 ## Fine-tuning Llama on SQuAD1.1
 
 > [!NOTE]
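
For reviewers, the section being removed described a command-line recipe. A minimal sketch of the invocation it implied is below; the script name `run_qa.py` and the fact that the remaining dataset/output flags are elided are assumptions on my part (only the environment variables and the flags shown come from the removed text), and the command is only assembled and printed here since actually running it requires Gaudi (HPU) hardware.

```shell
# Env vars from step a) of the removed section: disable lazy mode,
# enable int64 support for torch.compile on HPU.
export PT_HPU_LAZY_MODE=0
export PT_ENABLE_INT64_SUPPORT=1

# Step b): swap in albert-xxlarge-v1, turn off lazy mode, add the
# torch.compile flags, and drop --use_hpu_graphs_for_inference.
# Dataset/output flags from the base example are omitted here.
CMD="python run_qa.py \
  --model_name_or_path albert-xxlarge-v1 \
  --use_lazy_mode False \
  --torch_compile \
  --torch_compile_backend hpu_backend"

# Print rather than execute, since this sketch is hardware-dependent.
echo "$CMD"
```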