Class: OpenAI::Resources::Completions
- Inherits: Object
- Defined in: lib/openai/resources/completions.rb
Overview
Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
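The parameters accepted by this resource mirror the legacy completions endpoint. As an illustrative sketch (values are examples only, not defaults), a typical parameter hash for #create might look like:

```ruby
# Example parameter hash for Completions#create.
# All values below are illustrative; see Models::CompletionCreateParams
# for the authoritative list of parameters and their types.
params = {
  model: "gpt-3.5-turbo-instruct", # required: model ID
  prompt: "Say hello.",            # required: prompt text
  max_tokens: 16,                  # cap on generated tokens
  temperature: 0.7,                # sampling temperature
  n: 1,                            # number of completions to return
  stop: ["\n"]                     # stop sequence(s)
}
puts params[:model]
```

Only `model:` and `prompt:` are required keyword arguments; the rest default to nil and are omitted from the request body when unset.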
Instance Method Summary
- #create(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Models::Completion
  See #create_streaming for streaming counterpart.
- #create_streaming(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Completion>
  See #create for non-streaming counterpart.
- #initialize(client:) ⇒ Completions (constructor, private)
  A new instance of Completions.
Constructor Details
#initialize(client:) ⇒ Completions
This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.
Returns a new instance of Completions.
# File 'lib/openai/resources/completions.rb', line 146
def initialize(client:)
  @client = client
end
Instance Method Details
#create(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Models::Completion
See #create_streaming for streaming counterpart.
Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.
Creates a completion for the provided prompt and parameters.
Returns a completion object, or a sequence of completion objects if the request is streamed.
# File 'lib/openai/resources/completions.rb', line 59
def create(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
  if parsed[:stream]
    message = "Please use `#create_streaming` for the streaming use case."
    raise ArgumentError.new(message)
  end
  @client.request(
    method: :post,
    path: "completions",
    body: parsed,
    model: OpenAI::Completion,
    options: options
  )
end
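The guard at the top of #create rejects params that request streaming, since #create_streaming handles that case. A minimal self-contained sketch of this guard pattern (the method name create_guarded is illustrative, not part of the SDK):

```ruby
# Sketch of the stream-guard dispatch used by #create: refuse
# stream: true and direct the caller to the streaming method.
def create_guarded(params)
  if params[:stream]
    raise ArgumentError, "Please use `#create_streaming` for the streaming use case."
  end
  params
end

# A plain (non-streaming) call passes through:
create_guarded(model: "gpt-3.5-turbo-instruct", prompt: "Say hi")

# Passing stream: true raises ArgumentError:
begin
  create_guarded(model: "gpt-3.5-turbo-instruct", prompt: "Say hi", stream: true)
rescue ArgumentError => e
  puts e.message
end
```

Splitting the streaming and non-streaming paths into two methods lets each declare a precise return type (a Completion versus a Stream of Completions) instead of a union.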
#create_streaming(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Completion>
See #create for non-streaming counterpart.
Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.
Creates a completion for the provided prompt and parameters.
Returns a completion object, or a sequence of completion objects if the request is streamed.
# File 'lib/openai/resources/completions.rb', line 125
def create_streaming(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
  unless parsed.fetch(:stream, true)
    message = "Please use `#create` for the non-streaming use case."
    raise ArgumentError.new(message)
  end
  parsed.store(:stream, true)
  @client.request(
    method: :post,
    path: "completions",
    headers: {"accept" => "text/event-stream"},
    body: parsed,
    stream: OpenAI::Internal::Stream,
    model: OpenAI::Completion,
    options: options
  )
end
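Note the inverse guard here: an explicit stream: false is rejected, and stream: true is then forced into the request body before dispatch. A self-contained sketch of that body preparation (the helper name prepare_streaming_body is illustrative, not part of the SDK):

```ruby
# Sketch of how #create_streaming normalizes the parsed body:
# reject an explicit stream: false, then force stream: true.
def prepare_streaming_body(parsed)
  unless parsed.fetch(:stream, true)
    raise ArgumentError, "Please use `#create` for the non-streaming use case."
  end
  parsed.merge(stream: true)
end

body = prepare_streaming_body(model: "gpt-3.5-turbo-instruct", prompt: "Hi")
puts body[:stream] # true
```

Forcing stream: true (rather than trusting the caller to set it) keeps the wire format consistent with the "accept: text/event-stream" header, so the server always responds with server-sent events that OpenAI::Internal::Stream can consume.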