Class: OpenAI::Models::Chat::ChatCompletionChunk
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Chat::ChatCompletionChunk
- Defined in: lib/openai/models/chat/chat_completion_chunk.rb
Defined Under Namespace
Modules: ServiceTier
Classes: Choice
Instance Attribute Summary
- #choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>
  A list of chat completion choices.
- #created ⇒ Integer
  The Unix timestamp (in seconds) of when the chat completion was created.
- #id ⇒ String
  A unique identifier for the chat completion.
- #model ⇒ String
  The model used to generate the completion.
- #object ⇒ Symbol, :"chat.completion.chunk"
  The object type, which is always `chat.completion.chunk`.
- #service_tier ⇒ Symbol, ...
  Specifies the latency tier to use for processing the request.
- #system_fingerprint ⇒ String?
  This fingerprint represents the backend configuration that the model runs with.
- #usage ⇒ OpenAI::Models::CompletionUsage?
  An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request.
Instance Method Summary
- #initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ Object
  constructor
  Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ Object
Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 365
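For orientation, here is a minimal sketch of consuming these chunks from a stream. It assumes the gem's `stream_raw` streaming helper and an illustrative model name; adapt both to your setup.

```ruby
require "openai"

client = OpenAI::Client.new(api_key: ENV["OPENAI_API_KEY"])

# stream_raw yields OpenAI::Models::Chat::ChatCompletionChunk instances.
stream = client.chat.completions.stream_raw(
  model: "gpt-4o", # illustrative model name
  messages: [{role: "user", content: "Say hello"}]
)

stream.each do |chunk|
  # Every chunk in one stream shares the same #id and #created values.
  print(chunk.choices.first&.delta&.content)
end
```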
Instance Attribute Details
#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>
A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 19
required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionChunk::Choice] }
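Building on the streaming sketch above, a hedged illustration of handling multiple choices: with `n` greater than 1 each chunk may carry several choices, keyed by `index`, so per-choice deltas can be accumulated separately.

```ruby
# Accumulate streamed text per choice index; `stream` as in the sketch above.
texts = Hash.new { |hash, index| hash[index] = +"" }

stream.each do |chunk|
  chunk.choices.each do |choice|
    texts[choice.index] << (choice.delta.content || "")
  end
end

texts.each { |index, text| puts("choice #{index}: #{text}") }
```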
#created ⇒ Integer
The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 26
required :created, Integer
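Since the value is a Unix timestamp in seconds, it converts directly with Ruby's `Time.at`:

```ruby
# Convert the chunk's creation timestamp to a Time object in UTC.
Time.at(chunk.created).utc
```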
#id ⇒ String
A unique identifier for the chat completion. Each chunk has the same ID.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 11
required :id, String
#model ⇒ String
The model used to generate the completion.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 32
required :model, String
#object ⇒ Symbol, :"chat.completion.chunk"
The object type, which is always `chat.completion.chunk`.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 38
required :object, const: :"chat.completion.chunk"
#service_tier ⇒ Symbol, ...
Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:
- If set to `auto`, and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.
- If set to `auto`, and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to `default`, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.
- If set to `flex`, the request will be processed with the Flex Processing service tier. [Learn more](https://platform.openai.com/docs/guides/flex-processing).
- When not set, the default behavior is `auto`.
When this parameter is set, the response body will include the `service_tier` utilized.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 60
optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true
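As a sketch, the tier can be requested per call and read back from each chunk; the `stream_raw` helper and model name are carried over from the earlier example, and the `flex` value is illustrative.

```ruby
stream = client.chat.completions.stream_raw(
  model: "gpt-4o",
  messages: [{role: "user", content: "Say hello"}],
  service_tier: :flex # request the Flex Processing tier
)

stream.each do |chunk|
  # With service_tier set on the request, chunks echo the tier actually used.
  puts(chunk.service_tier)
end
```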
#system_fingerprint ⇒ String?
This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 68
optional :system_fingerprint, String
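For instance, a hedged sketch of pairing a fixed `seed` with the fingerprint: if the fingerprint differs between two runs with the same seed, a backend change may explain differing outputs.

```ruby
fingerprints = []

stream = client.chat.completions.stream_raw(
  model: "gpt-4o",
  messages: [{role: "user", content: "Say hello"}],
  seed: 42 # fixed seed for repeatability probing
)

# Collect the fingerprint reported on the chunks (may be nil).
stream.each { |chunk| fingerprints << chunk.system_fingerprint }
puts(fingerprints.compact.uniq)
```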
#usage ⇒ OpenAI::Models::CompletionUsage?
An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request. When present, it contains a null value **except for the last chunk**, which contains the token usage statistics for the entire request.
NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk, which contains the total token usage for the request.
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 80
optional :usage, -> { OpenAI::CompletionUsage }, nil?: true
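A sketch of collecting the final usage chunk, assuming `stream_options: {include_usage: true}` is set on the request as described above:

```ruby
usage = nil

stream = client.chat.completions.stream_raw(
  model: "gpt-4o",
  messages: [{role: "user", content: "Say hello"}],
  stream_options: {include_usage: true}
)

stream.each do |chunk|
  print(chunk.choices.first&.delta&.content)
  usage = chunk.usage if chunk.usage # only the final chunk carries usage
end

puts("\ntotal tokens: #{usage&.total_tokens}")
```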