Merge pull request #367 from promptmetheus/mock-kwargs
Add **kwargs to mock_completion
krrishdholakia authored Sep 14, 2023
2 parents da9546b + 630b5d2 commit d98c0a1
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion litellm/main.py
@@ -952,7 +952,7 @@ def chunks(lst, n):
     return results

 ## Use this in your testing pipeline, if you need to mock an LLM response
-def mock_completion(model: str, messages: List, stream: bool = False, mock_response: str = "This is a mock request"):
+def mock_completion(model: str, messages: List, stream: bool = False, mock_response: str = "This is a mock request", **kwargs):
     try:
         model_response = ModelResponse()
         if stream: # return a generator object, iterate through the text in chunks of 3 char / chunk
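The one-line change matters because callers of `mock_completion` typically pass the same provider-specific keyword arguments (`temperature`, `max_tokens`, etc.) they would pass to a real completion call; without `**kwargs` those calls raise `TypeError`. A minimal sketch of the behavior the change enables — the function body and return shape below are illustrative, not litellm's actual implementation:

```python
from typing import List


def mock_completion(model: str, messages: List, stream: bool = False,
                    mock_response: str = "This is a mock request", **kwargs):
    # **kwargs absorbs extra keyword arguments (e.g. temperature=0.7)
    # so test callers can pass real-completion parameters unchanged.
    return {"model": model,
            "choices": [{"message": {"role": "assistant",
                                     "content": mock_response}}]}


# Before this change, the temperature argument would raise:
# TypeError: mock_completion() got an unexpected keyword argument 'temperature'
resp = mock_completion("gpt-3.5-turbo",
                       [{"role": "user", "content": "hi"}],
                       temperature=0.7)
print(resp["choices"][0]["message"]["content"])  # This is a mock request
```

Accepting and ignoring unknown keyword arguments is a common pattern for mock endpoints: it keeps the mock's signature forward-compatible with whatever parameters the real API grows.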
