Class: OpenAI::Models::Beta::Assistant
- Inherits: Internal::Type::BaseModel
  - Object
  - Internal::Type::BaseModel
  - OpenAI::Models::Beta::Assistant
- Defined in:
- lib/openai/models/beta/assistant.rb
Overview
Represents an `assistant` that can call the model and use tools.
Defined Under Namespace
Classes: ToolResources
Instance Attribute Summary collapse
-
#created_at ⇒ Integer
The Unix timestamp (in seconds) for when the assistant was created.
-
#description ⇒ String?
The description of the assistant.
-
#id ⇒ String
The identifier, which can be referenced in API endpoints.
-
#instructions ⇒ String?
The system instructions that the assistant uses.
-
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object.
-
#model ⇒ String
ID of the model to use.
-
#name ⇒ String?
The name of the assistant.
-
#object ⇒ Symbol, :assistant
The object type, which is always `assistant`.
-
#response_format ⇒ Symbol, ...
Specifies the format that the model must output.
-
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2.
-
#tool_resources ⇒ OpenAI::Models::Beta::Assistant::ToolResources?
A set of resources that are used by the assistant’s tools.
-
#tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>
A list of tools enabled on the assistant.
-
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass.
Instance Method Summary collapse
-
#initialize(id: , created_at: , description: , instructions: , metadata: , model: , name: , tools: , response_format: nil, temperature: nil, tool_resources: nil, top_p: nil, object: :assistant) ⇒ Object
constructor
Some parameter documentation has been truncated; see Assistant for more details.
Methods inherited from Internal::Type::BaseModel
==, #==, #[], coerce, #deconstruct_keys, #deep_to_h, dump, fields, hash, #hash, inherited, inspect, #inspect, known_fields, optional, recursively_to_h, required, #to_h, #to_json, #to_s, to_sorbet_type, #to_yaml
Methods included from Internal::Type::Converter
#coerce, coerce, #dump, dump, #inspect, inspect, type_info
Methods included from Internal::Util::SorbetRuntimeSupport
#const_missing, #define_sorbet_constant!, #sorbet_constant_defined?, #to_sorbet_type, to_sorbet_type
Constructor Details
#initialize(id: , created_at: , description: , instructions: , metadata: , model: , name: , tools: , response_format: nil, temperature: nil, tool_resources: nil, top_p: nil, object: :assistant) ⇒ Object
Some parameter documentation has been truncated; see OpenAI::Models::Beta::Assistant for more details.
Represents an `assistant` that can call the model and use tools.
# File 'lib/openai/models/beta/assistant.rb', line 126
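As a rough usage sketch, the required keyword arguments from the signature above can be collected in a hash and splatted into the constructor. The attribute values below are hypothetical placeholders; only the keyword names and documented limits come from this page.

```ruby
# Hypothetical attribute values; keyword names match the constructor signature.
attrs = {
  id: "asst_abc123",                      # placeholder identifier
  created_at: 1_700_000_000,              # Unix timestamp in seconds
  description: "Math tutor",              # up to 512 characters, or nil
  instructions: "You are a math tutor.",  # up to 256,000 characters, or nil
  metadata: {purpose: "demo"},            # up to 16 key-value pairs, or nil
  model: "gpt-4o",                        # any available model ID
  name: "Tutor",                          # up to 256 characters, or nil
  tools: []                               # up to 128 tools
}

# With the openai gem loaded, this would build the model;
# `object:` defaults to :assistant, so it can be omitted:
# assistant = OpenAI::Models::Beta::Assistant.new(**attrs)
```

The optional attributes (`response_format`, `temperature`, `tool_resources`, `top_p`) default to `nil` and can simply be left out.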
Instance Attribute Details
#created_at ⇒ Integer
The Unix timestamp (in seconds) for when the assistant was created.
# File 'lib/openai/models/beta/assistant.rb', line 18
required :created_at, Integer
#description ⇒ String?
The description of the assistant. The maximum length is 512 characters.
# File 'lib/openai/models/beta/assistant.rb', line 24
required :description, String, nil?: true
#id ⇒ String
The identifier, which can be referenced in API endpoints.
# File 'lib/openai/models/beta/assistant.rb', line 12
required :id, String
#instructions ⇒ String?
The system instructions that the assistant uses. The maximum length is 256,000 characters.
# File 'lib/openai/models/beta/assistant.rb', line 31
required :instructions, String, nil?: true
#metadata ⇒ Hash{Symbol=>String}?
Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.
Keys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters.
# File 'lib/openai/models/beta/assistant.rb', line 42
required :metadata, OpenAI::Internal::Type::HashOf[String], nil?: true
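The documented limits (at most 16 pairs, keys up to 64 characters, values up to 512 characters) can be sketched as a simple check. The `valid_metadata?` helper below is purely illustrative, not part of the gem:

```ruby
# Illustrative check of the documented metadata constraints:
# at most 16 pairs, keys up to 64 characters, values up to 512 characters.
def valid_metadata?(metadata)
  metadata.size <= 16 &&
    metadata.all? { |k, v| k.to_s.length <= 64 && v.to_s.length <= 512 }
end

valid_metadata?({department: "support", tier: "gold"})  # => true
valid_metadata?({("k" * 65).to_sym => "x"})             # => false (key too long)
```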
#model ⇒ String
ID of the model to use. You can use the [List models](https://platform.openai.com/docs/api-reference/models/list) API to see all of your available models, or see our [Model overview](https://platform.openai.com/docs/models) for descriptions of them.
# File 'lib/openai/models/beta/assistant.rb', line 52
required :model, String
#name ⇒ String?
The name of the assistant. The maximum length is 256 characters.
# File 'lib/openai/models/beta/assistant.rb', line 58
required :name, String, nil?: true
#object ⇒ Symbol, :assistant
The object type, which is always `assistant`.
# File 'lib/openai/models/beta/assistant.rb', line 64
required :object, const: :assistant
#response_format ⇒ Symbol, ...
Specifies the format that the model must output. Compatible with [GPT-4o](https://platform.openai.com/docs/models#gpt-4o), [GPT-4 Turbo](https://platform.openai.com/docs/models#gpt-4-turbo-and-gpt-4), and all GPT-3.5 Turbo models since `gpt-3.5-turbo-1106`.
Setting to `{ "type": "json_schema", "json_schema": … }` enables Structured Outputs, which ensures the model will match your supplied JSON schema. Learn more in the [Structured Outputs guide](https://platform.openai.com/docs/guides/structured-outputs).
Setting to `{ "type": "json_object" }` enables JSON mode, which ensures the message the model generates is valid JSON.
Important: when using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
# File 'lib/openai/models/beta/assistant.rb', line 97
optional :response_format, union: -> { OpenAI::Beta::AssistantResponseFormatOption }, nil?: true
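The two structured modes described above correspond to payloads shaped roughly like the plain-hash sketches below. The `math_answer` schema content is a hypothetical example, not from the gem:

```ruby
# JSON mode: the model must still be told, via a system or user message,
# to produce JSON.
json_mode = {type: "json_object"}

# Structured Outputs: the model's reply must match the supplied JSON schema.
# The schema below is a hypothetical example.
json_schema_mode = {
  type: "json_schema",
  json_schema: {
    name: "math_answer",
    schema: {
      type: "object",
      properties: {answer: {type: "number"}},
      required: ["answer"]
    }
  }
}
```

When `response_format` is `nil` (the default), the model replies in free-form text.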
#temperature ⇒ Float?
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
# File 'lib/openai/models/beta/assistant.rb', line 105
optional :temperature, Float, nil?: true
#tool_resources ⇒ OpenAI::Models::Beta::Assistant::ToolResources?
A set of resources that are used by the assistant’s tools. The resources are specific to the type of tool. For example, the `code_interpreter` tool requires a list of file IDs, while the `file_search` tool requires a list of vector store IDs.
# File 'lib/openai/models/beta/assistant.rb', line 114
optional :tool_resources, -> { OpenAI::Beta::Assistant::ToolResources }, nil?: true
#tools ⇒ Array<OpenAI::Models::Beta::CodeInterpreterTool, OpenAI::Models::Beta::FileSearchTool, OpenAI::Models::Beta::FunctionTool>
A list of tools enabled on the assistant. There can be a maximum of 128 tools per assistant. Tools can be of types `code_interpreter`, `file_search`, or `function`.
# File 'lib/openai/models/beta/assistant.rb', line 72
required :tools, -> { OpenAI::Internal::Type::ArrayOf[union: OpenAI::Beta::AssistantTool] }
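The three documented tool types combine into a single array. A plain-hash sketch with one tool of each type follows; the `get_weather` function definition is hypothetical:

```ruby
# One tool of each documented type; at most 128 tools per assistant.
tools = [
  {type: "code_interpreter"},
  {type: "file_search"},
  {
    type: "function",
    function: {
      name: "get_weather",  # hypothetical function for illustration
      parameters: {
        type: "object",
        properties: {city: {type: "string"}}
      }
    }
  }
]

raise "too many tools" if tools.size > 128
```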
#top_p ⇒ Float?
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
# File 'lib/openai/models/beta/assistant.rb', line 124
optional :top_p, Float, nil?: true