Class: OpenAI::Resources::Completions

Inherits:
  Object
Defined in:
lib/openai/resources/completions.rb

Instance Method Summary

Constructor Details

#initialize(client:) ⇒ Completions

This method is part of a private API. You should avoid using this method if possible, as it may be removed or be changed in the future.

Returns a new instance of Completions.

Parameters:

  • client (OpenAI::Client)

# File 'lib/openai/resources/completions.rb', line 138

def initialize(client:)
  @client = client
end

Instance Method Details

#create(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Models::Completion

See #create_streaming for streaming counterpart.

Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.

Creates a completion for the provided prompt and parameters.

Parameters:

  • model (String, Symbol, OpenAI::Models::CompletionCreateParams::Model)

    ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models.

  • prompt (String, Array<String>, Array<Integer>, Array<Array<Integer>>, nil)

    The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

  • best_of (Integer, nil)

    Generates `best_of` completions server-side and returns the “best” (the one with the highest log probability per token).

  • echo (Boolean, nil)

    Echo back the prompt in addition to the completion

  • frequency_penalty (Float, nil)

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

  • logit_bias (Hash{Symbol=>Integer}, nil)

    Modify the likelihood of specified tokens appearing in the completion.

  • logprobs (Integer, nil)

    Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens.

  • max_tokens (Integer, nil)

    The maximum number of [tokens](/tokenizer) that can be generated in the completion.

  • n (Integer, nil)

    How many completions to generate for each prompt.

  • presence_penalty (Float, nil)

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

  • seed (Integer, nil)

    If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

  • stop (String, Array<String>, nil)

    Not supported with latest reasoning models `o3` and `o4-mini`.

  • stream_options (OpenAI::Models::Chat::ChatCompletionStreamOptions, nil)

    Options for streaming response. Only set this when you set `stream: true`.

  • suffix (String, nil)

    The suffix that comes after a completion of inserted text.

  • temperature (Float, nil)

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

  • top_p (Float, nil)

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

  • user (String)

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

  • (OpenAI::Models::Completion)

See Also:

  • #create_streaming



# File 'lib/openai/resources/completions.rb', line 54

def create(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
  if parsed[:stream]
    message = "Please use `#create_streaming` for the streaming use case."
    raise ArgumentError.new(message)
  end
  @client.request(
    method: :post,
    path: "completions",
    body: parsed,
    model: OpenAI::Completion,
    options: options
  )
end
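
A minimal usage sketch (an illustration, not taken from the library's own docs): it assumes an API key in the OPENAI_API_KEY environment variable, and the model ID and prompt are placeholders. As the guard in the source above shows, passing stream: true to this method raises ArgumentError; use #create_streaming for streaming.

require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# Blocking call; returns a single OpenAI::Models::Completion.
completion = client.completions.create(
  model: "gpt-3.5-turbo-instruct", # placeholder model ID
  prompt: "Say this is a test",
  max_tokens: 16
)
puts completion.choices.first.text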

#create_streaming(model:, prompt:, best_of: nil, echo: nil, frequency_penalty: nil, logit_bias: nil, logprobs: nil, max_tokens: nil, n: nil, presence_penalty: nil, seed: nil, stop: nil, stream_options: nil, suffix: nil, temperature: nil, top_p: nil, user: nil, request_options: {}) ⇒ OpenAI::Internal::Stream<OpenAI::Models::Completion>

See #create for non-streaming counterpart.

Some parameter documentation has been truncated; see Models::CompletionCreateParams for more details.

Creates a completion for the provided prompt and parameters.

Parameters:

  • model (String, Symbol, OpenAI::Models::CompletionCreateParams::Model)

    ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models.

  • prompt (String, Array<String>, Array<Integer>, Array<Array<Integer>>, nil)

    The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.

  • best_of (Integer, nil)

    Generates `best_of` completions server-side and returns the “best” (the one with the highest log probability per token).

  • echo (Boolean, nil)

    Echo back the prompt in addition to the completion

  • frequency_penalty (Float, nil)

    Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

  • logit_bias (Hash{Symbol=>Integer}, nil)

    Modify the likelihood of specified tokens appearing in the completion.

  • logprobs (Integer, nil)

    Include the log probabilities on the `logprobs` most likely output tokens, as well as the chosen tokens.

  • max_tokens (Integer, nil)

    The maximum number of [tokens](/tokenizer) that can be generated in the completion.

  • n (Integer, nil)

    How many completions to generate for each prompt.

  • presence_penalty (Float, nil)

    Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

  • seed (Integer, nil)

    If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.

  • stop (String, Array<String>, nil)

    Not supported with latest reasoning models `o3` and `o4-mini`.

  • stream_options (OpenAI::Models::Chat::ChatCompletionStreamOptions, nil)

    Options for streaming response. Only set this when you set `stream: true`.

  • suffix (String, nil)

    The suffix that comes after a completion of inserted text.

  • temperature (Float, nil)

    What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

  • top_p (Float, nil)

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.

  • user (String)

    A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

  • request_options (OpenAI::RequestOptions, Hash{Symbol=>Object}, nil)

Returns:

  • (OpenAI::Internal::Stream<OpenAI::Models::Completion>)

See Also:

  • #create



# File 'lib/openai/resources/completions.rb', line 117

def create_streaming(params)
  parsed, options = OpenAI::CompletionCreateParams.dump_request(params)
  unless parsed.fetch(:stream, true)
    message = "Please use `#create` for the non-streaming use case."
    raise ArgumentError.new(message)
  end
  parsed.store(:stream, true)
  @client.request(
    method: :post,
    path: "completions",
    headers: {"accept" => "text/event-stream"},
    body: parsed,
    stream: OpenAI::Internal::Stream,
    model: OpenAI::Completion,
    options: options
  )
end
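
A minimal streaming sketch (again an illustration, with placeholder model ID and prompt): the returned OpenAI::Internal::Stream is enumerable, yielding Completion chunks as they arrive over server-sent events.

require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

stream = client.completions.create_streaming(
  model: "gpt-3.5-turbo-instruct", # placeholder model ID
  prompt: "Write one sentence about Ruby."
)

# Consume chunks incrementally; each yielded object is a Completion.
stream.each do |completion|
  print completion.choices.first.text
end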