Class: RubynCode::LLM::Adapters::Anthropic
- Includes: JsonParsing, PromptCaching
- Defined in: lib/rubyn_code/llm/adapters/anthropic.rb
Constant Summary
- API_URL = 'https://api.anthropic.com/v1/messages'
- ANTHROPIC_VERSION = '2023-06-01'
- MAX_RETRIES = 3
- RETRY_DELAYS = [2, 5, 10].freeze
- AVAILABLE_MODELS = %w[claude-sonnet-4-20250514 claude-opus-4-6 claude-haiku-4-20250506].freeze
Constants included from PromptCaching
PromptCaching::CACHE_EPHEMERAL, PromptCaching::OAUTH_GATE
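To illustrate where API_URL and ANTHROPIC_VERSION fit, here is a minimal sketch of building (but not sending) a Messages API request with Net::HTTP. The helper name build_anthropic_request is hypothetical and not part of this adapter; the headers follow the public Anthropic API convention (x-api-key plus anthropic-version).

```ruby
require 'json'
require 'net/http'
require 'uri'

API_URL           = 'https://api.anthropic.com/v1/messages'
ANTHROPIC_VERSION = '2023-06-01'

# Hypothetical helper: assembles a Messages API POST without sending it.
# Sending would require Net::HTTP.start and a real API key.
def build_anthropic_request(api_key:, model:, messages:, max_tokens: 1024)
  uri = URI(API_URL)
  req = Net::HTTP::Post.new(uri)
  req['x-api-key']         = api_key
  req['anthropic-version'] = ANTHROPIC_VERSION
  req['content-type']      = 'application/json'
  req.body = JSON.generate({ model: model, max_tokens: max_tokens, messages: messages })
  req
end
```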
Instance Method Summary
- #chat(messages:, model:, max_tokens:, tools: nil, system: nil, on_text: nil, task_budget: nil) ⇒ Object (rubocop:disable Metrics/ParameterLists; mirrors the LLM adapter interface)
- #models ⇒ Object
- #provider_name ⇒ Object
Instance Method Details
#chat(messages:, model:, max_tokens:, tools: nil, system: nil, on_text: nil, task_budget: nil) ⇒ Object
rubocop:disable Metrics/ParameterLists – mirrors LLM adapter interface
# File 'lib/rubyn_code/llm/adapters/anthropic.rb', line 34

    def chat(messages:, model:, max_tokens:, tools: nil, system: nil, on_text: nil, task_budget: nil) # rubocop:disable Metrics/ParameterLists -- mirrors LLM adapter interface
      ensure_valid_token!
      use_streaming = on_text && oauth_token?

      body = build_request_body(
        messages: messages,
        tools: tools,
        system: system,
        model: model,
        max_tokens: max_tokens,
        stream: use_streaming,
        task_budget: task_budget
      )

      return stream_request(body, on_text) if use_streaming

      execute_with_retries(body, on_text)
    end
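The non-streaming path ends in execute_with_retries, whose implementation is not shown here. A plausible sketch, assuming MAX_RETRIES and RETRY_DELAYS drive a sleep-and-retry loop (with_retries and the injectable sleeper are hypothetical names, not the adapter's code):

```ruby
MAX_RETRIES  = 3
RETRY_DELAYS = [2, 5, 10].freeze

# Hypothetical retry loop: re-runs the block on error, sleeping the next
# delay from RETRY_DELAYS, and re-raises once MAX_RETRIES retries are spent.
# The sleeper lambda is injectable so tests need not actually sleep.
def with_retries(delays: RETRY_DELAYS, max: MAX_RETRIES, sleeper: ->(s) { sleep(s) })
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    raise if attempts > max

    sleeper.call(delays[attempts - 1] || delays.last)
    retry
  end
end
```

With these constants a failing request is attempted four times in total (one initial try plus three retries), with backoffs of 2, 5, and 10 seconds between attempts.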
#models ⇒ Object
# File 'lib/rubyn_code/llm/adapters/anthropic.rb', line 30

    def models
      AVAILABLE_MODELS
    end
#provider_name ⇒ Object
# File 'lib/rubyn_code/llm/adapters/anthropic.rb', line 26

    def provider_name
      'anthropic'
    end