Class: OmniAI::Anthropic::Chat

Inherits: Chat • Object
Defined in:
lib/omniai/anthropic/chat.rb,
lib/omniai/anthropic/chat/stream.rb,
lib/omniai/anthropic/chat/url_serializer.rb,
lib/omniai/anthropic/chat/file_serializer.rb,
lib/omniai/anthropic/chat/text_serializer.rb,
lib/omniai/anthropic/chat/tool_serializer.rb,
lib/omniai/anthropic/chat/choice_serializer.rb,
lib/omniai/anthropic/chat/content_serializer.rb,
lib/omniai/anthropic/chat/message_serializer.rb,
lib/omniai/anthropic/chat/function_serializer.rb,
lib/omniai/anthropic/chat/response_serializer.rb,
lib/omniai/anthropic/chat/thinking_serializer.rb,
lib/omniai/anthropic/chat/tool_call_serializer.rb,
lib/omniai/anthropic/chat/tool_call_result_serializer.rb

Overview

An Anthropic chat implementation.

Usage:

completion = OmniAI::Anthropic::Chat.process!(client: client) do |prompt|
  prompt.system('You are an expert in the field of AI.')
  prompt.user('What are the biggest risks of AI?')
end
completion.text # '...'

Defined Under Namespace

Modules: ChoiceSerializer, ContentSerializer, FileSerializer, FunctionSerializer, MessageSerializer, Model, ResponseSerializer, TextSerializer, ThinkingSerializer, ToolCallResultSerializer, ToolCallSerializer, ToolSerializer, URLSerializer

Classes: Stream

Constant Summary

DEFAULT_MODEL =
  Model::CLAUDE_SONNET

CONTEXT =

Returns:

  • (Context)

Context.build do |context|
  context.serializers[:tool] = ToolSerializer.method(:serialize)

  context.serializers[:file] = FileSerializer.method(:serialize)
  context.serializers[:url] = URLSerializer.method(:serialize)

  context.serializers[:choice] = ChoiceSerializer.method(:serialize)
  context.deserializers[:choice] = ChoiceSerializer.method(:deserialize)

  context.serializers[:tool_call] = ToolCallSerializer.method(:serialize)
  context.deserializers[:tool_call] = ToolCallSerializer.method(:deserialize)

  context.serializers[:tool_call_result] = ToolCallResultSerializer.method(:serialize)
  context.deserializers[:tool_call_result] = ToolCallResultSerializer.method(:deserialize)

  context.serializers[:function] = FunctionSerializer.method(:serialize)
  context.deserializers[:function] = FunctionSerializer.method(:deserialize)

  context.serializers[:message] = MessageSerializer.method(:serialize)
  context.deserializers[:message] = MessageSerializer.method(:deserialize)

  context.deserializers[:content] = ContentSerializer.method(:deserialize)
  context.deserializers[:response] = ResponseSerializer.method(:deserialize)

  context.serializers[:thinking] = ThinkingSerializer.method(:serialize)
  context.deserializers[:thinking] = ThinkingSerializer.method(:deserialize)
end
ADAPTIVE_THINKING_MIN_TOKENS =

Adaptive thinking can consume any portion of max_tokens before emitting output. Without a floor, callers using adaptive thinking with the legacy default (4096) silently get empty responses. This constant guarantees enough headroom for thinking plus output.

32_768

Instance Method Summary

Instance Method Details

#max_tokens ⇒ Object

Resolves max_tokens. Precedence: the thinking floor (when a thinking budget is set) takes priority over the per-call kwarg. Returns nil when neither applies, so the config default flows through unchanged.



# File 'lib/omniai/anthropic/chat.rb', line 110

def max_tokens
  thinking_max_tokens || @options[:max_tokens]
end

#messages ⇒ Array<Hash>

Returns:

  • (Array<Hash>)


# File 'lib/omniai/anthropic/chat.rb', line 160

def messages
  messages = @prompt.messages.reject(&:system?)
  messages.map { |message| message.serialize(context:) }
end

#path ⇒ String

Returns:

  • (String)


# File 'lib/omniai/anthropic/chat.rb', line 175

def path
  "/#{Client::VERSION}/messages"
end

#payload ⇒ Hash

NOTE: Anthropic requires temperature=1 (default) when thinking is enabled, so temperature is omitted from the payload when thinking_config is present.

Returns:

  • (Hash)


# File 'lib/omniai/anthropic/chat.rb', line 93

def payload
  OmniAI::Anthropic.config.chat_options
    .merge({
      model: @model,
      messages:,
      system:,
      stream: stream? || nil,
      temperature: thinking_config ? nil : @temperature,
      tools: tools_payload,
      thinking: thinking_config,
    })
    .merge({ max_tokens:, output_config: }.compact)
    .compact
end
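
The merge-then-compact pattern above can be exercised standalone. The sketch below uses hypothetical values (the model string, message, and max_tokens are illustrative, not the library's defaults) to show how nil entries are dropped and how the thinking floor overrides the configured max_tokens:

```ruby
# Standalone sketch of the payload assembly above (hypothetical values).
config_chat_options = { max_tokens: 4_096 } # stand-in for OmniAI::Anthropic.config.chat_options

thinking = { type: "enabled", budget_tokens: 10_000 }

payload = config_chat_options
  .merge({
    model: "claude-sonnet", # hypothetical model identifier
    messages: [{ role: "user", content: "Hi" }],
    system: nil,                       # nil entries are removed by compact
    stream: nil,
    temperature: thinking ? nil : 0.7, # omitted when thinking is enabled
    thinking: thinking,
  })
  .merge({ max_tokens: 18_000 }.compact) # thinking floor: budget + 8_000 > 4_096
  .compact

payload[:max_tokens]       # => 18000
payload.key?(:temperature) # => false
```

The final `.compact` is what lets `temperature: nil` stand in for "omit this key": Anthropic rejects a non-default temperature alongside thinking, so the key never reaches the API.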

#system ⇒ String?

Returns:

  • (String, nil)


# File 'lib/omniai/anthropic/chat.rb', line 166

def system
  parts = @prompt.messages.filter(&:system?).filter(&:text?).map(&:text)
  parts << formatting if formatting?
  return if parts.empty?

  parts.join("\n\n")
end

#thinking_config ⇒ Hash?

Translates the unified thinking option to Anthropic's native format.

Examples:

  • `thinking: { budget_tokens: 10000 }` becomes `{ type: "enabled", budget_tokens: 10000 }`
  • `thinking: { effort: nil }` becomes `{ type: "adaptive" }` (Claude decides)
  • `thinking: { effort: "medium" }` becomes `{ type: "adaptive" }` + output_config

Returns:

  • (Hash, nil)


# File 'lib/omniai/anthropic/chat.rb', line 119

def thinking_config
  return @thinking_config if defined?(@thinking_config)

  thinking = @options[:thinking]

  @thinking_config = case thinking
                     when true then { type: "enabled", budget_tokens: 10_000 }
                     when Hash
                       if thinking.key?(:effort)
                         { type: "adaptive" }
                       else
                         { type: "enabled" }.merge(thinking)
                       end
                     end
end
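
The mapping can be checked in isolation. The sketch below re-implements the case expression above as a plain function for illustration (it is not part of the library's API):

```ruby
# Standalone re-implementation of the thinking_config case expression,
# for illustration only.
def thinking_config_for(thinking)
  case thinking
  when true then { type: "enabled", budget_tokens: 10_000 }
  when Hash
    if thinking.key?(:effort)
      { type: "adaptive" }
    else
      { type: "enabled" }.merge(thinking)
    end
  end
end

thinking_config_for(true)                     # => { type: "enabled", budget_tokens: 10000 }
thinking_config_for({ budget_tokens: 5_000 }) # => { type: "enabled", budget_tokens: 5000 }
thinking_config_for({ effort: "medium" })     # => { type: "adaptive" }
thinking_config_for(nil)                      # => nil
```

Note that any Hash with an `:effort` key maps to adaptive, even `effort: nil`; a Hash without it is treated as native enabled-mode options and merged through as-is.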

#thinking_max_tokens ⇒ Integer?

Returns max_tokens with enough headroom when thinking is in play. The enabled-mode path preserves the existing `[base, budget + 8_000].max` floor; the adaptive-mode path applies ADAPTIVE_THINKING_MIN_TOKENS as a safety floor.

Returns:

  • (Integer, nil)


# File 'lib/omniai/anthropic/chat.rb', line 144

def thinking_max_tokens
  return unless thinking_config

  case thinking_config[:type]
  when "adaptive"
    [@options[:max_tokens] || ADAPTIVE_THINKING_MIN_TOKENS, ADAPTIVE_THINKING_MIN_TOKENS].max
  when "enabled"
    budget = thinking_config[:budget_tokens]
    return unless budget

    base = @options[:max_tokens] || OmniAI::Anthropic.config.chat_options[:max_tokens] || 0
    [base, budget + 8_000].max
  end
end
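
Worked through with hypothetical inputs, the two floors resolve as follows. The sketch re-implements the logic above as a plain function (the `config_default` parameter stands in for the configured chat_options lookup):

```ruby
ADAPTIVE_THINKING_MIN_TOKENS = 32_768

# Standalone re-implementation of the thinking_max_tokens floors,
# for illustration only.
def thinking_max_tokens_for(thinking_config, requested_max_tokens, config_default: nil)
  return unless thinking_config

  case thinking_config[:type]
  when "adaptive"
    # Adaptive: never below the safety floor, but a larger request passes through.
    [requested_max_tokens || ADAPTIVE_THINKING_MIN_TOKENS, ADAPTIVE_THINKING_MIN_TOKENS].max
  when "enabled"
    budget = thinking_config[:budget_tokens]
    return unless budget

    # Enabled: guarantee the budget plus 8_000 tokens of output headroom.
    base = requested_max_tokens || config_default || 0
    [base, budget + 8_000].max
  end
end

# Enabled mode: a 10_000 budget forces at least 18_000 even when the caller asked for 4_096.
thinking_max_tokens_for({ type: "enabled", budget_tokens: 10_000 }, 4_096) # => 18000
# Adaptive mode: the 32_768 floor overrides a smaller request...
thinking_max_tokens_for({ type: "adaptive" }, 8_192)                       # => 32768
# ...but a larger request passes through unchanged.
thinking_max_tokens_for({ type: "adaptive" }, 64_000)                      # => 64000
```

Enabled mode without a budget returns nil, so a bare `{ type: "enabled" }` config falls through to the per-call kwarg or config default via `#max_tokens`.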