Class: OpenAI::Models::Beta::Threads::Run::TruncationStrategy

Inherits:
Internal::Type::BaseModel
Defined in:
lib/openai/models/beta/threads/run.rb

Overview

Defined Under Namespace

Modules: Type

Instance Attribute Summary collapse

Instance Method Summary collapse

Methods inherited from Internal::Type::BaseModel

==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml

Methods included from Internal::Type::Converter

#coerce, coerce, #dump, dump, #inspect, inspect, type_info

Methods included from Internal::Util::SorbetRuntimeSupport

#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type

Constructor Details

#initialize(type:, last_messages: nil) ⇒ Object

Some parameter documentation has been truncated; see OpenAI::Models::Beta::Threads::Run::TruncationStrategy for more details.

Controls how a thread will be truncated prior to the run. Use this to control the initial context window of the run.

Parameters:

  • type (Symbol, OpenAI::Models::Beta::Threads::Run::TruncationStrategy::Type)

    The truncation strategy to use for the thread. The default is `auto`. If set to

  • last_messages (Integer, nil) (defaults to: nil)

    The number of most recent messages from the thread when constructing the context



# File 'lib/openai/models/beta/threads/run.rb', line 413
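As a rough sketch of the keyword shape this constructor expects — using a stand-in `Struct` here rather than the real `Internal::Type::BaseModel` subclass, so the example runs without the gem:

```ruby
# Stand-in for OpenAI::Models::Beta::Threads::Run::TruncationStrategy;
# the real class coerces and validates, but the keyword shape matches.
TruncationStrategy = Struct.new(:type, :last_messages, keyword_init: true)

# Keep only the 10 most recent messages in the run's context.
strategy = TruncationStrategy.new(type: :last_messages, last_messages: 10)

# With :auto, last_messages is omitted and stays nil.
auto = TruncationStrategy.new(type: :auto)

puts strategy.type              # => last_messages
puts auto.last_messages.inspect # => nil
```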

Instance Attribute Details

#last_messages ⇒ Integer?

The number of most recent messages from the thread when constructing the context for the run.

Returns:

  • (Integer, nil)


# File 'lib/openai/models/beta/threads/run.rb', line 411

optional :last_messages, Integer, nil?: true

#type ⇒ Symbol, OpenAI::Models::Beta::Threads::Run::TruncationStrategy::Type

The truncation strategy to use for the thread. The default is `auto`. If set to `last_messages`, the thread will be truncated to the n most recent messages in the thread. When set to `auto`, messages in the middle of the thread will be dropped to fit the context length of the model, `max_prompt_tokens`.



# File 'lib/openai/models/beta/threads/run.rb', line 404

required :type, enum: -> { OpenAI::Beta::Threads::Run::TruncationStrategy::Type }
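The two strategy types described above can be illustrated with a small local sketch. This is not the server's actual algorithm — truncation happens API-side — but it shows which messages each `type` keeps:

```ruby
# Illustrative only: mimics the documented semantics of the two
# truncation strategy types on a local array of messages.
def truncate_thread(messages, type:, last_messages: nil)
  case type
  when :last_messages
    # Keep only the n most recent messages.
    messages.last(last_messages)
  when :auto
    # The API drops middle messages to fit the model's context window
    # (max_prompt_tokens); locally we just pass the thread through.
    messages
  else
    raise ArgumentError, "unknown truncation type: #{type}"
  end
end

thread = %w[m1 m2 m3 m4 m5]
puts truncate_thread(thread, type: :last_messages, last_messages: 2).inspect
# => ["m4", "m5"]
```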