Open AI Completion

This Actor is deprecated: the developer has discontinued it and it is no longer available.
tkapler/openai-completion-actor

Provides a simple but powerful text-in, text-out interface to any OpenAI model. You input some text as a prompt, and the model generates a text completion that attempts to match whatever context or pattern you gave it. See https://beta.openai.com/docs/guides/completion

API key

api_key (string, required)

Prompt

prompt (array, required)

One or more prompt messages to send to OpenAI.

Engine

engine (enum, optional)

Value options (string): "davinci", "curie", "babbage", "ada", "davinci-instruct-beta", "curie-instruct-beta", "davinci-codex", "cushman-codex", "content-filter-alpha"

Default value of this property is "davinci"

Temperature

temperature (string, optional)

What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer.
We generally recommend altering this or top_p but not both.

Default value of this property is "0"
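Temperature rescales the model's output distribution before a token is sampled. This is not the Actor's code, just a minimal sketch of the softmax-with-temperature arithmetic behind the parameter:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Divide logits by the temperature, then normalize with softmax.
    # Low temperature sharpens the distribution toward the top token
    # (temperature -> 0 approaches argmax); high temperature flattens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
sharp = softmax_with_temperature(logits, 0.2)    # near-argmax: top token dominates
creative = softmax_with_temperature(logits, 0.9) # probability mass more spread out
```

Note that a temperature of exactly 0 is treated as argmax by the API; the plain formula above would divide by zero.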

Max. tokens

max_tokens (integer, optional)

The maximum number of tokens to generate in the completion.
The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except davinci-codex, which supports 4096).

Default value of this property is 16
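The constraint above (prompt tokens plus max_tokens must fit in the model's context length) can be checked up front. A hedged sketch, assuming you already know the prompt's token count (a real tokenizer such as OpenAI's would be needed to measure it):

```python
def fits_context(prompt_token_count, max_tokens, context_length=2048):
    # The prompt's token count plus max_tokens must not exceed the
    # model's context length (2048 for most engines listed above,
    # 4096 for davinci-codex).
    return prompt_token_count + max_tokens <= context_length

fits_context(2000, 16)           # True: 2016 <= 2048
fits_context(2000, 100)          # False: 2100 > 2048
fits_context(2000, 100, 4096)    # True under davinci-codex's larger limit
```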

Top p

top_p (string, optional)

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.

Default value of this property is "1"
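Nucleus sampling keeps only the smallest set of tokens whose probabilities add up to top_p, then samples among them. An illustrative sketch of that selection step (not the Actor's code):

```python
def nucleus_filter(probs, top_p):
    # Rank tokens by probability, then keep tokens until their
    # cumulative mass reaches top_p; sampling happens only among these.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = [], 0.0
    for index, p in ranked:
        kept.append(index)
        cumulative += p
        if cumulative >= top_p:
            break
    return kept

probs = [0.5, 0.3, 0.15, 0.05]
nucleus_filter(probs, 0.1)  # the top token alone already covers 10%
nucleus_filter(probs, 0.9)  # the top three tokens are needed to reach 90%
```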

Frequency penalty

frequency_penalty (string, optional)

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

Default value of this property is "0"

Presence penalty

presence_penalty (string, optional)

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

Default value of this property is "0"
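The two penalties above adjust a token's logit before sampling: the frequency penalty scales with how many times the token has already appeared, while the presence penalty is applied once if it has appeared at all. A hedged sketch of that arithmetic, based on OpenAI's published description:

```python
def apply_penalties(logit, count, frequency_penalty, presence_penalty):
    # count: how many times this token has appeared in the text so far.
    # The logit is reduced by count * frequency_penalty, plus the
    # presence_penalty once the token has appeared at least once.
    appeared = 1.0 if count > 0 else 0.0
    return logit - count * frequency_penalty - appeared * presence_penalty

# A token seen 3 times with penalties 0.5 and 0.2 loses 1.7 logits:
apply_penalties(1.0, 3, 0.5, 0.2)
```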

Stop sequence

stop (array, optional)

Define one or more sequences that, when generated, force GPT-3 to stop.

Default value of this property is ["\n"]
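Putting the fields together, a hypothetical run input for this Actor could look like the following Python dict. The key and prompt text are placeholders, and note that the schema types several numeric fields (temperature, top_p, the penalties) as strings:

```python
run_input = {
    "api_key": "YOUR_OPENAI_API_KEY",        # placeholder, required
    "prompt": ["Write a tagline for an ice cream shop."],
    "engine": "davinci",                     # schema default
    "temperature": "0.9",                    # string-typed in the schema
    "max_tokens": 64,                        # integer
    "top_p": "1",
    "frequency_penalty": "0",
    "presence_penalty": "0",
    "stop": ["\n"],                          # schema default
}
```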

Developer
Maintained by Community