Class: OpenAI::Models::Beta::AssistantUpdateParams
- Inherits: Internal::Type::BaseModel
- Extended by: Internal::Type::RequestParameters::Converter
- Includes: Internal::Type::RequestParameters
- Defined in: lib/openai/models/beta/assistant_update_params.rb
Overview
Defined Under Namespace
Modules: Model
Classes: ToolResources
Instance Attribute Summary
- #description ⇒ String?
  The description of the assistant.
- #instructions ⇒ String?
  The system instructions that the assistant uses.
- #metadata ⇒ Hash{Symbol=>String}?
  Set of 16 key-value pairs that can be attached to an object.
- #model ⇒ String, ...
  ID of the model to use.
- #name ⇒ String?
  The name of the assistant.
- #reasoning_effort ⇒ Symbol, ...
  **o-series models only**.
- #response_format ⇒ Symbol, ...
  Specifies the format that the model must output.
- #temperature ⇒ Float?
  What sampling temperature to use, between 0 and 2.
- #tool_resources ⇒ OpenAI::Models::Beta::AssistantUpdateParams::ToolResources?
  A set of resources that are used by the assistant’s tools.
- #tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>?
  A list of tools enabled on the assistant.
- #top_p ⇒ Float?
  An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
Attributes included from Internal::Type::RequestParameters
Instance Method Summary
- #initialize(description: nil, instructions: nil, metadata: nil, model: nil, name: nil, reasoning_effort: nil, response_format: nil, temperature: nil, tool_resources: nil, tools: nil, top_p: nil, request_options: {}) ⇒ Object (constructor)
  Some parameter documentation has been truncated; see AssistantUpdateParams for more details.
Methods included from Internal::Type::RequestParameters::Converter
Methods included from Internal::Type::RequestParameters
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(description: nil, instructions: nil, metadata: nil, model: nil, name: nil, reasoning_effort: nil, response_format: nil, temperature: nil, tool_resources: nil, tools: nil, top_p: nil, request_options: {}) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Beta::AssistantUpdateParams for more details.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 122
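For orientation, a minimal usage sketch. It assumes the gem's `OpenAI::Client` reads `OPENAI_API_KEY` from the environment and that the assistants resource exposes an `update(assistant_id, **params)` method accepting these keywords; the assistant ID and attribute values are placeholders.

```ruby
require "openai"

client = OpenAI::Client.new # assumes OPENAI_API_KEY is set in the environment

# Update a (placeholder) assistant; every keyword is optional and maps to an
# attribute documented on this page.
assistant = client.beta.assistants.update(
  "asst_abc123",
  name: "Data analyst",
  instructions: "You analyze CSV files and answer questions about them.",
  model: "gpt-4o",
  temperature: 0.2
)
```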
Instance Attribute Details
#description ⇒ String?
The description of the assistant. The maximum length is 512 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 15
optional :description, String, nil?: true
#instructions ⇒ String?
The system instructions that the assistant uses. The maximum length is 256,000 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 22
optional :instructions, String, nil?: true
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 33
optional :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
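As an illustration, a metadata hash within the documented limits (at most 16 pairs, keys up to 64 characters, values up to 512 characters); the keys and values here are arbitrary examples.

```ruby
require "openai"

metadata = {
  project: "quarterly-report",   # key well under the 64-character limit
  owner:   "data-platform-team"  # value well under the 512-character limit
}

params = OpenAI::Models::Beta::AssistantUpdateParams.new(metadata: metadata)
```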
#model ⇒ String, ...
ID of the model to use. You can use the [List models](platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](platform.openai.com/docs/models) for descriptions of them.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 43
optional :model, union: -> { OpenAI::Beta::AssistantUpdateParams::Model }
#name ⇒ String?
The name of the assistant. The maximum length is 256 characters.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 49
optional :name, String, nil?: true
#reasoning_effort ⇒ Symbol, ...
**o-series models only**
Constrains effort on reasoning for [reasoning models](platform.openai.com/docs/guides/reasoning). Currently supported values are `low`, `medium`, and `high`. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 60
optional :reasoning_effort, enum: -> { OpenAI::ReasoningEffort }, nil?: true
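A sketch of lowering reasoning effort on an o-series model; the model name is only an example, and the symbol values mirror the documented `low`/`medium`/`high` options.

```ruby
require "openai"

params = OpenAI::Models::Beta::AssistantUpdateParams.new(
  model: "o3-mini",       # o-series models only
  reasoning_effort: :low  # trade some reasoning depth for faster, cheaper responses
)
```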
#response_format ⇒ Symbol, ...
Specifies the format that the model must output. Compatible with [GPT-4o](platform.openai.com/docs/models#gpt-4o), [GPT-4 Turbo](platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_schema", "json_schema": ... }` enables Structured Outputs which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](platform.openai.com/docs/guides/structured-outputs).
Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly “stuck” request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 85
optional :response_format, union: -> { OpenAI::Beta::AssistantResponseFormatOption }, nil?: true
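A sketch of the two documented modes expressed as plain hashes (assumed to coerce through the `AssistantResponseFormatOption` union); the schema contents are made up for illustration.

```ruby
require "openai"

# JSON mode: remember to also instruct the model to produce JSON in a message.
json_mode = { type: :json_object }

# Structured Outputs: constrain the model to a supplied JSON schema.
structured = {
  type: :json_schema,
  json_schema: {
    name: "weather_report",
    schema: {
      type: "object",
      properties: {
        city:   { type: "string" },
        temp_c: { type: "number" }
      },
      required: %w[city temp_c]
    }
  }
}

params = OpenAI::Models::Beta::AssistantUpdateParams.new(response_format: structured)
```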
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 93
optional :temperature, Float, nil?: true
#tool_resources ⇒ OpenAI::Models::Beta::AssistantUpdateParams::ToolResources?
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 102
optional :tool_resources, -> { OpenAI::Beta::AssistantUpdateParams::ToolResources }, nil?: true
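A sketch pairing each tool with its resource type as described above; the file and vector store IDs are placeholders, and the plain-hash form is assumed to coerce into `ToolResources`.

```ruby
require "openai"

tool_resources = {
  code_interpreter: { file_ids: ["file_abc123"] },       # files for code_interpreter
  file_search:      { vector_store_ids: ["vs_abc123"] }  # vector stores for file_search
}

params = OpenAI::Models::Beta::AssistantUpdateParams.new(tool_resources: tool_resources)
```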
#tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>?
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 110
optional :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Beta::AssistantTool] }
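A sketch enabling one tool of each documented type (well under the 128-tool limit); the function definition is a made-up example, and the plain-hash form is assumed to coerce through the tool union.

```ruby
require "openai"

tools = [
  { type: :code_interpreter },
  { type: :file_search },
  {
    type: :function,
    function: {
      name: "get_stock_price",
      description: "Look up the latest price for a ticker symbol.",
      parameters: {
        type: "object",
        properties: { ticker: { type: "string" } },
        required: ["ticker"]
      }
    }
  }
]

params = OpenAI::Models::Beta::AssistantUpdateParams.new(tools: tools)
```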
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
# File 'lib/openai/models/beta/assistant_update_params.rb', line 120
optional :top_p, Float, nil?: true
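Following the recommendation above, a sketch that adjusts nucleus sampling while leaving temperature at its default.

```ruby
require "openai"

# Consider only the tokens in the top 10% of probability mass.
params = OpenAI::Models::Beta::AssistantUpdateParams.new(top_p: 0.1)
```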