Class: OpenAI::Models::Chat::ChatCompletionChunk

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/chat/chat_completion_chunk.rb

Defined Under Namespace

Modules: ServiceTier
Classes: Choice

Instance Attribute Summary

Instance Method Summary

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, #inspect, inspect, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(id:, choices:, created:, model:, service_tier: nil, system_fingerprint: nil, usage: nil, object: :"chat.completion.chunk") ⇒ Object

Represents a streamed chunk of a chat completion response returned by the model, based on the provided input.

Parameters:

  • id (String) — A unique identifier for the chat completion. Each chunk has the same ID.

  • choices (Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>) — A list of chat completion choices.

  • created (Integer) — The Unix timestamp (in seconds) of when the chat completion was created.

  • model (String) — The model used to generate the completion.

  • service_tier (Symbol, nil) — The latency tier used for processing the request.

  • system_fingerprint (String, nil) — The backend configuration fingerprint the model ran with.

  • usage (OpenAI::Models::CompletionUsage, nil) — Token usage statistics for the entire request; only present on the final chunk when requested.

  • object (Symbol, :"chat.completion.chunk") — The object type, which is always `chat.completion.chunk`.

# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 365
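A minimal sketch of building a chunk by hand (for example, as a test fixture), assuming the generated initializer accepts the attributes documented below; the id and model values are hypothetical:

chunk = OpenAI::Models::Chat::ChatCompletionChunk.new(
  id: "chatcmpl-123",      # hypothetical identifier, for illustration only
  choices: [],
  created: Time.now.to_i,  # Unix timestamp in seconds
  model: "gpt-4o-mini"     # hypothetical model name
)
chunk.object # => :"chat.completion.chunk" (supplied by the const default)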

Instance Attribute Details

#choices ⇒ Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>

A list of chat completion choices. Can contain more than one element if `n` is greater than 1. Can also be empty for the last chunk if you set `stream_options: {"include_usage": true}`.

Returns:

  • (Array<OpenAI::Models::Chat::ChatCompletionChunk::Choice>)
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 19

required :choices, -> { OpenAI::Internal::Type::ArrayOf[OpenAI::Chat::ChatCompletionChunk::Choice] }
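A hedged sketch of consuming this attribute from a stream; it assumes an initialized `client`, and the streaming method name (`stream_raw`) and the `delta.content` accessor may differ by client version:

stream = client.chat.completions.stream_raw(
  model: "gpt-4o-mini",
  messages: [{role: :user, content: "Say hello"}]
)
stream.each do |chunk|
  chunk.choices.each do |choice|
    print choice.delta.content # each choice carries an incremental delta
  end
end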

#created ⇒ Integer

The Unix timestamp (in seconds) of when the chat completion was created. Each chunk has the same timestamp.

Returns:

  • (Integer)


# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 26

required :created, Integer
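Because the value is a Unix timestamp in seconds, it converts directly to a Time:

Time.at(chunk.created) # => the creation time; identical across all chunks of one stream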

#id ⇒ String

A unique identifier for the chat completion. Each chunk has the same ID.

Returns:

  • (String)


# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 11

required :id, String

#model ⇒ String

The model used to generate the completion.

Returns:

  • (String)


# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 32

required :model, String

#object ⇒ Symbol, :"chat.completion.chunk"

The object type, which is always `chat.completion.chunk`.

Returns:

  • (Symbol, :"chat.completion.chunk")


# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 38

required :object, const: :"chat.completion.chunk"

#service_tier ⇒ Symbol, ...

Specifies the latency tier to use for processing the request. This parameter is relevant for customers subscribed to the scale tier service:

  • If set to `auto`, and the Project is Scale tier enabled, the system will utilize scale tier credits until they are exhausted.

  • If set to `auto`, and the Project is not Scale tier enabled, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

  • If set to `default`, the request will be processed using the default service tier with a lower uptime SLA and no latency guarantee.

  • If set to `flex`, the request will be processed with the Flex Processing service tier. [Learn more](https://platform.openai.com/docs/guides/flex-processing).

  • When not set, the default behavior is `auto`.

When this parameter is set, the response body will include the `service_tier` utilized.

Returns:

  • (Symbol, nil)
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 60

optional :service_tier, enum: -> { OpenAI::Chat::ChatCompletionChunk::ServiceTier }, nil?: true
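A small sketch of checking which tier actually served the request, assuming the enum exposes symbol values such as :default and :flex:

case chunk.service_tier
when :flex    then puts "processed on the Flex tier"
when :default then puts "processed on the default tier"
end # nil when the parameter was not set on the request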

#system_fingerprint ⇒ String?

This fingerprint represents the backend configuration that the model runs with. Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism.

Returns:

  • (String, nil)


# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 68

optional :system_fingerprint, String
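A sketch of pairing the fingerprint with a fixed `seed` to detect backend changes across a run; the bookkeeping here is illustrative, and `stream` is assumed to come from a streaming call as above:

require "set"

fingerprints = Set.new
stream.each do |chunk|
  fingerprints << chunk.system_fingerprint if chunk.system_fingerprint
end
warn "backend configuration changed mid-run" if fingerprints.size > 1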

#usage ⇒ OpenAI::Models::CompletionUsage?

An optional field that will only be present when you set `stream_options: {"include_usage": true}` in your request. When present, it contains a null value **except for the last chunk**, which contains the token usage statistics for the entire request.

NOTE: If the stream is interrupted or cancelled, you may not receive the final usage chunk which contains the total token usage for the request.

Returns:

  • (OpenAI::Models::CompletionUsage, nil)
# File 'lib/openai/models/chat/chat_completion_chunk.rb', line 80

optional :usage, -> { OpenAI::CompletionUsage }, nil?: true
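A sketch of reading usage from the final chunk, assuming the request was made with `stream_options: {include_usage: true}` and `stream` comes from a streaming call as above:

usage = nil
stream.each do |chunk|
  usage = chunk.usage if chunk.usage # nil on every chunk except the last
end
puts "total tokens: #{usage.total_tokens}" if usage # may never arrive if the stream is interrupted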