| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| temperature | number | No | Sampling temperature, between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic. | 0.7 |
| top_p | number | No | An alternative way to shape the sampling distribution, between 0 and 1. Usually set instead of temperature rather than together with it. | 0.9 |
| n | number | No | How many replies to generate for each input message. | 1 |
| stream | boolean | No | Whether to enable streaming output. When true, the response is returned incrementally as a stream, similar to the ChatGPT interface. | false |
| stop | string | No | Up to 4 strings may be specified; generation stops as soon as one of them appears in the output. | "\n" |
| max_tokens | number | No | The maximum number of tokens to generate in a single reply, subject to the model's context-length limit. | 1024 |
| presence_penalty | number | No | Between -2.0 and 2.0. Positive values encourage the model to introduce new topics; negative values make new topics less likely. | 0 |
| frequency_penalty | number | No | Between -2.0 and 2.0. Positive values reduce verbatim repetition of phrases; negative values make repetition more likely. | |
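These parameters map directly onto the JSON body of a chat completion request. The following is a minimal sketch, not a definitive client: it assumes an OpenAI-compatible `/v1/chat/completions` endpoint, an `OPENAI_API_KEY` environment variable, and the `gpt-3.5-turbo` model name, and simply sends the example values from the table and prints the first reply.

```python
import os
import requests

# Assumed endpoint and credential source; adjust for the service you call.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

payload = {
    "model": "gpt-3.5-turbo",        # assumed model name
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,              # higher = more random, lower = more deterministic
    "top_p": 0.9,                    # alternative to temperature; usually set one or the other
    "n": 1,                          # number of replies to generate
    "stream": False,                 # set True for streaming output
    "stop": "\n",                    # stop generating once this string appears
    "max_tokens": 1024,              # cap on tokens in a single reply
    "presence_penalty": 0,           # >0 encourages new topics, <0 discourages them
    "frequency_penalty": 0,          # >0 reduces repeated phrases, <0 allows more repetition
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```

Note that if `stream` is set to true, the response arrives incrementally as an event stream rather than a single JSON document, so it must be read chunk by chunk instead of via a single `resp.json()` call.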