Class: OmniAI::Anthropic::Chat
- Inherits: Chat
  - Object
  - Chat
  - OmniAI::Anthropic::Chat
- Defined in:
- lib/omniai/anthropic/chat.rb,
lib/omniai/anthropic/chat/stream.rb,
lib/omniai/anthropic/chat/url_serializer.rb,
lib/omniai/anthropic/chat/file_serializer.rb,
lib/omniai/anthropic/chat/text_serializer.rb,
lib/omniai/anthropic/chat/tool_serializer.rb,
lib/omniai/anthropic/chat/choice_serializer.rb,
lib/omniai/anthropic/chat/content_serializer.rb,
lib/omniai/anthropic/chat/message_serializer.rb,
lib/omniai/anthropic/chat/function_serializer.rb,
lib/omniai/anthropic/chat/response_serializer.rb,
lib/omniai/anthropic/chat/thinking_serializer.rb,
lib/omniai/anthropic/chat/tool_call_serializer.rb,
lib/omniai/anthropic/chat/tool_call_result_serializer.rb
Overview
Defined Under Namespace
Modules: ChoiceSerializer, ContentSerializer, FileSerializer, FunctionSerializer, MessageSerializer, Model, ResponseSerializer, TextSerializer, ThinkingSerializer, ToolCallResultSerializer, ToolCallSerializer, ToolSerializer, URLSerializer
Classes: Stream
Constant Summary collapse
- DEFAULT_MODEL =
    Model::CLAUDE_SONNET
- CONTEXT =
    Context.build do |context|
      context.serializers[:tool] = ToolSerializer.method(:serialize)
      context.serializers[:file] = FileSerializer.method(:serialize)
      context.serializers[:url] = URLSerializer.method(:serialize)
      context.serializers[:choice] = ChoiceSerializer.method(:serialize)
      context.deserializers[:choice] = ChoiceSerializer.method(:deserialize)
      context.serializers[:tool_call] = ToolCallSerializer.method(:serialize)
      context.deserializers[:tool_call] = ToolCallSerializer.method(:deserialize)
      context.serializers[:tool_call_result] = ToolCallResultSerializer.method(:serialize)
      context.deserializers[:tool_call_result] = ToolCallResultSerializer.method(:deserialize)
      context.serializers[:function] = FunctionSerializer.method(:serialize)
      context.deserializers[:function] = FunctionSerializer.method(:deserialize)
      context.serializers[:message] = MessageSerializer.method(:serialize)
      context.deserializers[:message] = MessageSerializer.method(:deserialize)
      context.deserializers[:content] = ContentSerializer.method(:deserialize)
      context.deserializers[:response] = ResponseSerializer.method(:deserialize)
      context.serializers[:thinking] = ThinkingSerializer.method(:serialize)
      context.deserializers[:thinking] = ThinkingSerializer.method(:deserialize)
    end
- ADAPTIVE_THINKING_MIN_TOKENS =
    32_768
  Adaptive thinking can consume any portion of max_tokens before emitting output. Without a floor, callers using adaptive mode with the legacy default (4096) silently receive empty responses. This constant guarantees enough headroom for thinking plus output.
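As a sketch of the floor's intent (standalone code with a hypothetical helper name, not the gem's actual API), the resolution looks like this:

```ruby
# Sketch of the adaptive floor. `adaptive_floor` is a hypothetical helper
# used only for illustration; the gem applies this logic inside
# #thinking_max_tokens.
ADAPTIVE_THINKING_MIN_TOKENS = 32_768

def adaptive_floor(requested_max_tokens)
  # A caller-supplied max_tokens below the floor is raised to the floor;
  # nil (no kwarg) also resolves to the floor.
  [requested_max_tokens || ADAPTIVE_THINKING_MIN_TOKENS, ADAPTIVE_THINKING_MIN_TOKENS].max
end

adaptive_floor(nil)    #=> 32768
adaptive_floor(4_096)  #=> 32768
adaptive_floor(50_000) #=> 50000
```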
Instance Method Summary collapse
- #max_tokens ⇒ Object
  Resolved max_tokens.
- #messages ⇒ Array<Hash>
- #path ⇒ String
- #payload ⇒ Hash
  NOTE: Anthropic requires temperature=1 (the default) when thinking is enabled, so temperature is omitted from the payload when thinking_config is present.
- #system ⇒ String?
- #thinking_config ⇒ Hash?
  Translates the unified thinking option to Anthropic's native format.
- #thinking_max_tokens ⇒ Integer?
  Returns max_tokens, ensuring enough headroom when thinking is in play.
Instance Method Details
#max_tokens ⇒ Object
Resolved max_tokens. Precedence: thinking floor (when budget set) > per-call kwarg. Returns nil when neither is set, so the config default flows through unchanged.
# File 'lib/omniai/anthropic/chat.rb', line 110

def max_tokens
  thinking_max_tokens || @options[:max_tokens]
end
#messages ⇒ Array<Hash>
# File 'lib/omniai/anthropic/chat.rb', line 160

def messages
  messages = @prompt.messages.reject(&:system?)
  messages.map { |message| message.serialize(context:) }
end
#path ⇒ String
# File 'lib/omniai/anthropic/chat.rb', line 175

def path
  "/#{Client::VERSION}/messages"
end
#payload ⇒ Hash
NOTE: Anthropic requires temperature=1 (the default) when thinking is enabled, so temperature is omitted from the payload when thinking_config is present.

# File 'lib/omniai/anthropic/chat.rb', line 93

def payload
  OmniAI::Anthropic.config.chat_options
    .merge({
      model: @model,
      messages:,
      system:,
      stream: stream? || nil,
      temperature: thinking_config ? nil : @temperature,
      tools: tools_payload,
      thinking: thinking_config,
    })
    .merge({ max_tokens:, output_config: }.compact)
    .compact
end
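A minimal sketch (plain Ruby, not the gem itself) of why temperature disappears from the serialized payload when thinking is enabled: the key is set to nil and `Hash#compact` drops it. The model name and temperature value below are illustrative assumptions.

```ruby
# `thinking_config` here is a stand-in hash, not the gem's method.
thinking_config = { type: "enabled", budget_tokens: 10_000 }

payload = {
  model: "claude-sonnet-4-0", # hypothetical model name for illustration
  temperature: thinking_config ? nil : 0.7, # nil when thinking is on
  thinking: thinking_config,
}.compact # compact removes nil-valued keys, so :temperature is dropped

payload.key?(:temperature) #=> false
```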
#system ⇒ String?
# File 'lib/omniai/anthropic/chat.rb', line 166

def system
  parts = @prompt.messages.filter(&:system?).filter(&:text?).map(&:text)
  parts << formatting if formatting?

  return if parts.empty?

  parts.join("\n\n")
end
#thinking_config ⇒ Hash?
Translates the unified thinking option to Anthropic's native format.

Example: `thinking: { budget_tokens: 10000 }` becomes `{ type: "enabled", budget_tokens: 10000 }`
Example: `thinking: { effort: nil }` becomes `{ type: "adaptive" }` (Claude decides)
Example: `thinking: { effort: "medium" }` becomes `{ type: "adaptive" }` + output_config

# File 'lib/omniai/anthropic/chat.rb', line 119

def thinking_config
  return @thinking_config if defined?(@thinking_config)

  thinking = @options[:thinking]

  @thinking_config =
    case thinking
    when true then { type: "enabled", budget_tokens: 10_000 }
    when Hash
      if thinking.key?(:effort)
        { type: "adaptive" }
      else
        { type: "enabled" }.merge(thinking)
      end
    end
end
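The translation table above can be exercised in isolation. This is a hedged sketch with a hypothetical helper name (`translate_thinking`), reimplementing the case expression outside the class:

```ruby
# Standalone reimplementation of the unified-option translation, for
# illustration only; the gem does this inside #thinking_config.
def translate_thinking(thinking)
  case thinking
  when true then { type: "enabled", budget_tokens: 10_000 }
  when Hash
    if thinking.key?(:effort)
      { type: "adaptive" } # effort itself is carried separately via output_config
    else
      { type: "enabled" }.merge(thinking)
    end
  end
end

translate_thinking(true)                     #=> {:type=>"enabled", :budget_tokens=>10000}
translate_thinking({ budget_tokens: 5_000 }) #=> {:type=>"enabled", :budget_tokens=>5000}
translate_thinking({ effort: "medium" })     #=> {:type=>"adaptive"}
translate_thinking(false)                    #=> nil (thinking disabled)
```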
#thinking_max_tokens ⇒ Integer?
Returns max_tokens ensuring enough headroom when thinking is in play. Enabled-mode path preserves the existing [base, budget+8000].max floor. Adaptive-mode path applies ADAPTIVE_THINKING_MIN_TOKENS as a safety floor.
# File 'lib/omniai/anthropic/chat.rb', line 144

def thinking_max_tokens
  return unless thinking_config

  case thinking_config[:type]
  when "adaptive"
    [@options[:max_tokens] || ADAPTIVE_THINKING_MIN_TOKENS, ADAPTIVE_THINKING_MIN_TOKENS].max
  when "enabled"
    budget = thinking_config[:budget_tokens]
    return unless budget

    base = @options[:max_tokens] || OmniAI::Anthropic.config.chat_options[:max_tokens] || 0
    [base, budget + 8_000].max
  end
end
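The enabled-mode headroom rule can be sketched on its own: max_tokens must cover the thinking budget plus 8,000 tokens of output headroom, but a larger caller-supplied base wins. The helper name below is hypothetical, not the gem's API:

```ruby
# Sketch of the enabled-mode floor: [base, budget + 8_000].max, with a
# missing base treated as 0 so the budget-derived floor applies.
def enabled_floor(base_max_tokens, budget_tokens)
  [base_max_tokens || 0, budget_tokens + 8_000].max
end

enabled_floor(4_096, 10_000)  #=> 18000 (budget + headroom wins)
enabled_floor(32_000, 10_000) #=> 32000 (explicit base wins)
enabled_floor(nil, 10_000)    #=> 18000
```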