Class: OpenAI::Models::Chat::ChatCompletion
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Chat::ChatCompletion
- Defined in: lib/openai/models/chat/chat_completion.rb
Overview
Defined Under Namespace
Modules: ServiceTier Classes: Choice
Instance Attribute Summary
-
#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletion::Choice>
A list of chat completion choices.
-
#created ⇒ Integer
The Unix timestamp (in seconds) of when the chat completion was created.
-
#id ⇒ String
A unique identifier for the chat completion.
-
#model ⇒ String
The model used for the chat completion.
-
#object ⇒ Symbol, :"chat.completion"
The object type, which is always `chat.completion`.
-
#service_tier ⇒ Symbol, ...
Specifies the latency tier to use for processing the request.
-
#system_fingerprint ⇒ String?
This fingerprint represents the backend configuration that the model runs with.
-
#usage ⇒ OpenAI::Models::CompletionUsage?
Usage statistics for the completion request.
Instance Method Summary
-
#initialize(id: , choices: , created: , model: , service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion") ⇒ Object
constructor
Represents a chat completion response returned by the model, based on the provided input.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id: , choices: , created: , model: , service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion") ⇒ Object
Represents a chat completion response returned by the model, based on the provided input.
# File 'lib/openai/models/chat/chat_completion.rb', line 182
Instance Attribute Details
#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletion::Choice>
A list of chat completion choices. Can be more than one if `n` is greater than 1.
# File 'lib/openai/models/chat/chat_completion.rb', line 21
required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletion::Choice] }
#created ⇒ Integer
The Unix timestamp (in seconds) of when the chat completion was created.
# File 'lib/openai/models/chat/chat_completion.rb', line 27
required :created, Integer
#id ⇒ String
A unique identifier for the chat completion.
# File 'lib/openai/models/chat/chat_completion.rb', line 14
required :id, String
#model ⇒ String
The model used for the chat completion.
# File 'lib/openai/models/chat/chat_completion.rb', line 33
required :model, String
#object ⇒ Symbol, :"chat.completion"
The object type, which is always `chat.completion`.
# File 'lib/openai/models/chat/chat_completion.rb', line 39
required :object, const: :"chat.completion"
#service_tier ⇒ Symbol, ...
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
- If set to `auto`, and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
- If set to `auto`, and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to `default`, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to `flex`, the request will be processed with the Flex Processing service tier. [Learn more](platform.openai.com/docs/guides/flex-processing).
- When not set, the default behavior is `auto`.
When this parameter is set, the response body will include the `service_tier` utilized.
# File 'lib/openai/models/chat/chat_completion.rb', line 61
optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletion::ServiceTier }, nil?: true
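Because `service_tier` is optional and nilable, client code should treat its absence as "not requested". A minimal sketch using only Ruby's stdlib JSON parser (the payload below is illustrative, not a real API response):

```ruby
require "json"

# Illustrative raw response body; values are made up for the example.
raw = '{"id": "chatcmpl-123", "service_tier": "default"}'
response = JSON.parse(raw)

# Per the docs above, `service_tier` appears in the response only when it was
# set on the request; when present it is "auto", "default", or "flex".
tier = response["service_tier"]
puts(tier ? "processed on the #{tier} tier" : "service tier not requested")
```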
#system_fingerprint ⇒ String?
This fingerprint represents the backend configuration that the model runs with.
Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
# File 'lib/openai/models/chat/chat_completion.rb', line 70
optional :system_fingerprint, String
#usage ⇒ OpenAI::Models::CompletionUsage?
Usage statistics for the completion request.
# File 'lib/openai/models/chat/chat_completion.rb', line 76
optional :usage, -> { OpenAI::CompletionUsage }
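The attribute definitions above mirror the shape of the raw API payload, so they can be exercised without the gem itself. A self-contained sketch with stdlib `json`, reading each documented field from an illustrative (made-up) response body:

```ruby
require "json"

# Illustrative payload mirroring the documented fields; the content is made up.
raw = <<~PAYLOAD
  {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "gpt-4o",
    "choices": [
      {"index": 0, "message": {"role": "assistant", "content": "Hello!"}, "finish_reason": "stop"}
    ],
    "usage": {"prompt_tokens": 9, "completion_tokens": 3, "total_tokens": 12}
  }
PAYLOAD

completion = JSON.parse(raw)

# Required fields (id, choices, created, model, object) are always present.
puts completion["id"]
puts Time.at(completion["created"]).utc # `created` is Unix seconds

# `choices` is an array; it has more than one element only when `n` > 1.
completion["choices"].each do |choice|
  puts choice["message"]["content"]
end

# `usage` is marked optional above, so guard the access.
usage = completion["usage"]
puts "total tokens: #{usage["total_tokens"]}" if usage
```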