model | kotlin.String | ID of the model to use for completion. You can select one of `ada`, `babbage`, `curie`, or `davinci`. |
question | kotlin.String | Question to get answered. |
examples | kotlin.collections.List<kotlin.collections.List<kotlin.String>> | List of (question, answer) pairs that will help steer the model towards the tone and answer format you'd like. We recommend adding 2 to 3 examples. |
examplesContext | kotlin.String | A text snippet containing the contextual information used to generate the answers for the `examples` you provide. |
documents | kotlin.collections.List<kotlin.String> | List of documents from which the answer for the input `question` should be derived. If this is an empty list, the question will be answered based on the question-answer examples. You should specify either `documents` or a `file`, but not both. | [optional]
file | kotlin.String | The ID of an uploaded file that contains documents to search over. See upload file for how to upload a file of the desired format and purpose. You should specify either `documents` or a `file`, but not both. | [optional]
searchModel | kotlin.String | ID of the model to use for Search. You can select one of `ada`, `babbage`, `curie`, or `davinci`. | [optional]
maxRerank | kotlin.Int | The maximum number of documents to be ranked by Search when using `file`. Setting it to a higher value leads to improved accuracy but with increased latency and cost. | [optional]
temperature | java.math.BigDecimal | What sampling temperature to use. Higher values mean the model will take more risks; a value of 0 (argmax sampling) works better for scenarios with a well-defined answer. | [optional]
logprobs | kotlin.Int | Include the log probabilities on the `logprobs` most likely tokens, as well as the chosen tokens. For example, if `logprobs` is 5, the API will return a list of the 5 most likely tokens. The API will always return the `logprob` of the sampled token, so there may be up to `logprobs+1` elements in the response. The maximum value for `logprobs` is 5. If you need more than this, please contact us through our Help center and describe your use case. When `logprobs` is set, `completion` will be automatically added into `expand` to get the logprobs. | [optional]
maxTokens | kotlin.Int | The maximum number of tokens allowed for the generated answer. | [optional]
stop | CreateAnswerRequestStop |  | [optional]
n | kotlin.Int | How many answers to generate for each question. | [optional]
logitBias | kotlin.Any | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use this tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. As an example, you can pass `{"50256": -100}` to prevent the <\|endoftext\|> token from being generated. | [optional]
returnMetadata | kotlin.Boolean | A special boolean flag for showing metadata. If set to `true`, each document entry in the returned JSON will contain a "metadata" field. This flag only takes effect when `file` is set. | [optional]
returnPrompt | kotlin.Boolean | If set to `true`, the returned JSON will include a "prompt" field containing the final prompt that was used to request a completion. This is mainly useful for debugging purposes. | [optional]
expand | kotlin.collections.List<kotlin.Any> | If an object name is in the list, we provide the full information of the object; otherwise, we only provide the object ID. Currently we support `completion` and `file` objects for expansion. | [optional]
user | kotlin.String | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. Learn more. | [optional]
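
A minimal sketch of building this request in Kotlin, assuming the generated `CreateAnswerRequest` data class exposes the properties above as constructor parameters (package, nullability, and exact parameter names may differ in your generated client); the field values below are purely illustrative.

```kotlin
import java.math.BigDecimal

// Sketch only: construct a CreateAnswerRequest using the properties listed above.
val request = CreateAnswerRequest(
    model = "curie",
    question = "Which puppy is happy?",
    // Each example is a (question, answer) pair encoded as a two-element list.
    examples = listOf(
        listOf("What is human life expectancy in the United States?", "78 years.")
    ),
    examplesContext = "In 2017, U.S. life expectancy was 78.6 years.",
    // Provide either inline `documents` or an uploaded `file` ID, but not both.
    documents = listOf("Puppy A is happy.", "Puppy B is sad."),
    searchModel = "ada",
    maxRerank = 10,
    temperature = BigDecimal.ZERO,       // argmax sampling for a well-defined answer
    logitBias = mapOf("50256" to -100),  // discourage the <|endoftext|> token
    n = 1
)
```

A file-based variant would drop `documents` and pass the uploaded file's ID via `file` instead, since the two parameters are mutually exclusive.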