Class: OpenAI::Models::Chat::ChatCompletion

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/chat/chat_completion.rb

Overview

Defined Under Namespace

Modules: ServiceTier Classes: Choice

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, #inspect, inspect, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion") ⇒ Object

Returns a new instance of ChatCompletion.

Parameters:

  • id (String): A unique identifier for the chat completion.

  • choices (Array<OpenAI::Models::Chat::ChatCompletion::Choice>): A list of chat completion choices.

  • created (Integer): The Unix timestamp (in seconds) of when the chat completion was created.

  • model (String): The model used for the chat completion.

  • service_tier (Symbol, nil) (defaults to: nil): Specifies the latency tier used for processing the request.

  • system_fingerprint (String) (defaults to: nil): The backend configuration fingerprint that the model runs with.

  • usage (OpenAI::Models::CompletionUsage) (defaults to: nil): Usage statistics for the completion request.

  • object (Symbol, :"chat.completion") (defaults to: :"chat.completion"): The object type, which is always `chat.completion`.


# File 'lib/openai/models/chat/chat_completion.rb', line 182
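
Instances of this class are normally returned by the client rather than constructed by hand. A minimal usage sketch, assuming an `OpenAI::Client` configured with an API key and the `chat.completions.create` wrapper (treat the exact request arguments as assumptions, not part of this class's documentation):

  require "openai"

  # Illustrative only: client construction and request arguments are assumptions.
  client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

  completion = client.chat.completions.create(
    model: "gpt-4o",
    messages: [{role: "user", content: "Say hello"}]
  )

  completion.class # => OpenAI::Models::Chat::ChatCompletion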

Instance Attribute Details

#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletion::Choice>

A list of chat completion choices. Can be more than one if `n` is greater than 1.

Returns:

  • (Array<OpenAI::Models::Chat::ChatCompletion::Choice>)

# File 'lib/openai/models/chat/chat_completion.rb', line 21

required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletion::Choice] }
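
A short, hedged sketch of reading the first choice from a response; `completion` is assumed to be an instance of this class, and the choice is assumed to carry an assistant message:

  # Each element is an OpenAI::Models::Chat::ChatCompletion::Choice.
  first_choice = completion.choices.first
  puts first_choice.message.content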

#created ⇒ Integer

The Unix timestamp (in seconds) of when the chat completion was created.

Returns:

  • (Integer)


# File 'lib/openai/models/chat/chat_completion.rb', line 27

required :created, Integer
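
Because the value is a Unix timestamp in seconds, it converts directly to a Ruby Time (illustrative sketch):

  created_at = Time.at(completion.created).utc
  puts created_at  # e.g. 2024-01-01 12:00:00 UTC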

#id ⇒ String

A unique identifier for the chat completion.

Returns:

  • (String)


# File 'lib/openai/models/chat/chat_completion.rb', line 14

required :id, String

#model ⇒ String

The model used for the chat completion.

Returns:

  • (String)


# File 'lib/openai/models/chat/chat_completion.rb', line 33

required :model, String

#object ⇒ Symbol, :"chat.completion"

The object type, which is always `chat.completion`.

Returns:

  • (Symbol, :"chat.completion")


# File 'lib/openai/models/chat/chat_completion.rb', line 39

required :object, const: :"chat.completion"

#service_tier ⇒ Symbol, ...

Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:

  • If set to `auto`, and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.

  • If set to `auto`, and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

  • If set to `default`, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

  • If set to `flex`, the request will be processed with the Flex Processing service tier. [Learn more](platform.openai.com/docs/guides/flex-processing).

  • When not set, the default behavior is `auto`.

When this parameter is set, the response body will include the `service_tier` utilized.



# File 'lib/openai/models/chat/chat_completion.rb', line 61

optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true
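
An illustrative sketch of requesting a tier and then checking which tier the response reports; the request-side `service_tier:` argument and the symbol values are assumptions based on the enum described above:

  # `client` as constructed in the earlier sketch.
  completion = client.chat.completions.create(
    model: "gpt-4o",
    messages: [{role: "user", content: "Hello"}],
    service_tier: :flex
  )

  # Per the description above, the response reports the tier utilized when
  # the parameter was set on the request.
  case completion.service_tier
  when :flex    then puts "processed with Flex Processing"
  when :default then puts "processed on the default tier"
  end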

#system_fingerprint ⇒ String?

This fingerprint represents the backend configuration that the model runs with.

Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.

Returns:

  • (String, nil)


# File 'lib/openai/models/chat/chat_completion.rb', line 70

optional :system_fingerprint, String
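
A hedged sketch of the determinism check described above, assuming requests are made with a fixed `seed` and the previous fingerprint has been kept from an earlier response:

  previous_fingerprint = nil # e.g. persisted from an earlier response with the same seed

  # `client` as constructed in the earlier sketch.
  completion = client.chat.completions.create(
    model: "gpt-4o",
    messages: [{role: "user", content: "Hello"}],
    seed: 1234
  )

  if previous_fingerprint && completion.system_fingerprint != previous_fingerprint
    warn "Backend configuration changed; results may differ despite the same seed."
  end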

#usage ⇒ OpenAI::Models::CompletionUsage?

Usage statistics for the completion request.



Returns:

  • (OpenAI::Models::CompletionUsage, nil)

# File 'lib/openai/models/chat/chat_completion.rb', line 76

optional :usage, -> { OpenAI::CompletionUsage }
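
For illustration, token counts can be read when usage is present; the field names (`prompt_tokens`, `completion_tokens`, `total_tokens`) follow the API's usage object and should be treated as assumptions here:

  if (usage = completion.usage)
    puts "prompt tokens:     #{usage.prompt_tokens}"
    puts "completion tokens: #{usage.completion_tokens}"
    puts "total tokens:      #{usage.total_tokens}"
  end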