Fix DPOTrainer docstrings (#1298)
Some issues were causing the auto-generation of the API reference to fail, and the args overlapped in the documentation page
alvarobartt authored Jan 31, 2024
1 parent 036213b commit 6f40f20
Showing 1 changed file with 3 additions and 3 deletions.
trl/trainer/dpo_trainer.py
@@ -123,11 +123,11 @@ class DPOTrainer(Trainer):
     precompute_ref_log_probs (`bool`, defaults to `False`):
         Flag to precompute reference model log probabilities and evaluation datasets. This is useful if you want to train
         without the reference model and reduce the total GPU memory needed.
-    dataset_num_proc (`Optional[int]`):
+    dataset_num_proc (`Optional[int]`, *optional*):
         The number of workers to use to tokenize the data. Defaults to None.
-    model_init_kwargs: (`Optional[Dict]`, *optional*):
+    model_init_kwargs (`Optional[Dict]`, *optional*):
         Dict of Optional kwargs to pass when instantiating the model from a string
-    ref_model_init_kwargs: (`Optional[Dict]`, *optional*):
+    ref_model_init_kwargs (`Optional[Dict]`, *optional*):
         Dict of Optional kwargs to pass when instantiating the ref model from a string
     model_adapter_name (`str`, defaults to `None`):
         Name of the train target PEFT adapter, when using LoRA with multiple adapters.
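The failure mode fixed here is the stray colon after the argument name (`model_init_kwargs: (...)` instead of `model_init_kwargs (...)`): docstring auto-doc tools typically match a `name (type):` pattern, so the extra colon makes the line parse as free text and the rendered args run together. A minimal sketch of that parsing, using a hypothetical regex rather than doc-builder's actual parser:

```python
import re

# Hypothetical, simplified pattern an auto-doc tool might use to detect
# an argument line of the form:  arg_name (`type`, *optional*):
ARG_PATTERN = re.compile(r"^(\w+) \((`[^)]*`.*)\):")

broken = "model_init_kwargs: (`Optional[Dict]`, *optional*):"
fixed = "model_init_kwargs (`Optional[Dict]`, *optional*):"

# The colon after the name breaks the match, so the arg is not recognized.
print(ARG_PATTERN.match(broken))            # no match
# Without the colon, the arg name and type annotation are extracted cleanly.
print(ARG_PATTERN.match(fixed).group(1))    # 'model_init_kwargs'
```

This is why the commit only moves the colon: the descriptions themselves were fine, but the arg headers had to match the expected shape for the API reference to render each argument as its own entry.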
