This Actor is unavailable because the developer has decided to deprecate it.

OpenAI Completion
Provides a simple but powerful text-in, text-out interface to any OpenAI model. You input some text as a prompt, and the model generates a text completion that attempts to match whatever context or pattern you gave it. https://beta.openai.com/docs/guides/completion
Engine
engine
Enum, optional
Select the OpenAI engine.
Value options: "davinci", "curie", "babbage", "ada", "davinci-instruct-beta", "curie-instruct-beta", "davinci-codex", "cushman-codex", "content-filter-alpha"
Default value of this property is "davinci"
Temperature
temperature
String, optional
What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
We generally recommend altering this or top_p but not both.
Default value of this property is "0"
Max. tokens
max_tokens
Integer, optional
The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except davinci-codex, which supports 4096).
Default value of this property is 16
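As a rough sketch of the context-length rule above: the prompt's token count plus max_tokens must stay within the model's limit. The `count_tokens` helper below is a hypothetical stand-in (a whitespace split); real OpenAI models use a BPE tokenizer, so actual counts will differ.

```python
# Context lengths from the description above: most engines 2048, davinci-codex 4096.
CONTEXT_LENGTHS = {"davinci": 2048, "davinci-codex": 4096}

def count_tokens(text: str) -> int:
    # Hypothetical approximation only; NOT the real BPE tokenizer.
    return len(text.split())

def fits_context(prompt: str, max_tokens: int, engine: str = "davinci") -> bool:
    # True if prompt tokens + max_tokens fit within the engine's context length.
    limit = CONTEXT_LENGTHS.get(engine, 2048)
    return count_tokens(prompt) + max_tokens <= limit
```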
Top p
top_p
String, optional
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Default value of this property is "1"
Frequency penalty
frequency_penalty
String, optional
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Default value of this property is "0"
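Putting the schema together, the Actor's input can be assembled as a plain object using the parameter names and defaults listed above. The `prompt` field is assumed from the description ("You input some text as a prompt") and is not shown in the schema listing; note that temperature, top_p, and frequency_penalty are string-typed in this schema.

```python
# Sketch: build an input object for this Actor from the schema above.
# "prompt" is an assumed field name; defaults mirror the listed property defaults.
def build_input(prompt: str, **overrides) -> dict:
    payload = {
        "engine": "davinci",
        "prompt": prompt,
        "temperature": "0",   # string-typed in this schema
        "max_tokens": 16,
        "top_p": "1",
        "frequency_penalty": "0",
    }
    payload.update(overrides)
    return payload
```

For example, `build_input("Write a haiku about rain", temperature="0.9")` keeps the other defaults while raising the sampling temperature for a more creative completion.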