
Issue of bert_model arg in run_classify.py #45

Closed
llidev opened this issue Nov 20, 2018 · 1 comment

Comments

@llidev
Contributor

llidev commented Nov 20, 2018

Hi,

I am trying to understand the bert_model arg in run_classify.py. In the file, I can see

tokenizer = BertTokenizer.from_pretrained(args.bert_model)

where bert_model is expected to be the path to the model's vocab text file.

However, I also see

model = BertForSequenceClassification.from_pretrained(args.bert_model, len(label_list))

where bert_model is expected to be an archive file containing the model checkpoint and config.

Could you please advise on the correct use of bert_model when I have already converted my pretrained model locally?

Thanks!
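
For concreteness, a minimal sketch of the two calls side by side, using the public shortcut name rather than a local path (the import paths and the label list here are assumptions, not taken from the script itself):

from pytorch_pretrained_bert.tokenization import BertTokenizer
from pytorch_pretrained_bert.modeling import BertForSequenceClassification

bert_model = "bert-base-uncased"   # shortcut name; the same string is passed to both calls
label_list = ["0", "1"]            # hypothetical labels for a binary task

tokenizer = BertTokenizer.from_pretrained(bert_model)
model = BertForSequenceClassification.from_pretrained(bert_model, len(label_list))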

@thomwolf
Member

Hi, please read this section of the readme.
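
A sketch of what loading a locally converted model might look like, assuming (as the snippets above suggest) that a name not in the pretrained map is treated as a local path, with a vocab file for the tokenizer and a converted archive for the model. All paths and the label count below are hypothetical:

from pytorch_pretrained_bert.tokenization import BertTokenizer
from pytorch_pretrained_bert.modeling import BertForSequenceClassification

# Hypothetical outputs of the local TF -> PyTorch conversion step
vocab_path = "/path/to/converted_model/vocab.txt"
archive_path = "/path/to/converted_model/bert_model.tar.gz"
num_labels = 2  # hypothetical number of classes for the task

tokenizer = BertTokenizer.from_pretrained(vocab_path)
model = BertForSequenceClassification.from_pretrained(archive_path, num_labels)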
