Class: LlmMetaClient::ServerResource
- Inherits: Object
- Defined in: lib/llm_meta_client/server_resource.rb
This is a non-persisted model for fetching external server prompts.

Constant Summary
- FAMILY_DISPLAY_NAMES =
  { "openai" => "OpenAI", "anthropic" => "Anthropic", "google" => "Google", "ollama" => "Ollama" }.freeze
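The mapping can be used to turn a stored llm_type into a human-readable label. A minimal sketch, assuming a lookup helper (`family_display_name` is hypothetical, not part of the gem):

```ruby
# The constant as defined in the source, reproduced for illustration
FAMILY_DISPLAY_NAMES = {
  "openai" => "OpenAI", "anthropic" => "Anthropic",
  "google" => "Google", "ollama" => "Ollama"
}.freeze

# Hypothetical helper: resolve a display name, falling back to the
# raw llm_type for unknown families
def family_display_name(llm_type)
  FAMILY_DISPLAY_NAMES.fetch(llm_type, llm_type)
end
```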
Class Method Summary
- .available_llm_families(jwt_token) ⇒ Object
  Retrieves LLM families with their API keys grouped by llm_type.
  Returns: [{ llm_type:, api_keys: [{ uuid:, description:, available_models: }] }]
- .available_llm_options(jwt_token) ⇒ Object
  Retrieves the LLM options available for user selection (API keys + Ollama).
  For guest users (no jwt_token), only Ollama is returned.
- .fetch_mcp_servers(jwt_token) ⇒ Object
- .fetch_mcp_tools(jwt_token, mcp_server_uuid) ⇒ Object
Class Method Details
.available_llm_families(jwt_token) ⇒ Object
Retrieves LLM families with their API keys grouped by llm_type.
Returns: [{ llm_type:, api_keys: [{ uuid:, description:, available_models: }] }]
# File 'lib/llm_meta_client/server_resource.rb', line 37
# (`ollama_options` below is a reconstructed name for a private Ollama
# helper whose identifier did not survive in this listing.)

def available_llm_families(jwt_token)
  if jwt_token.blank?
    # Guest users: only Ollama
    return build_families(ollama_options, [])
  end

  api_keys = llm_api_keys(jwt_token)

  ollama_opts =
    begin
      ollama_options
    rescue LlmMetaClient::Exceptions::OllamaUnavailableError => e
      Rails.logger.warn "Ollama unavailable: #{e.message}"
      raise e if api_keys.empty?
      []
    end

  build_families(ollama_opts, api_keys)
end
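The documented return shape can be illustrated with a small grouping sketch. `build_families` itself is private to the gem, so this only mirrors the structure described above; the field names come from the doc and the sample data is invented:

```ruby
# Sample API keys as flat hashes (invented data, documented field names)
api_keys = [
  { llm_type: "openai", uuid: "k1", description: "team key", available_models: ["gpt-4o"] },
  { llm_type: "openai", uuid: "k2", description: "personal", available_models: ["gpt-4o-mini"] },
  { llm_type: "anthropic", uuid: "k3", description: "main", available_models: ["claude-sonnet"] }
]

# Group keys by llm_type into the documented family shape:
# [{ llm_type:, api_keys: [{ uuid:, description:, available_models: }] }]
families = api_keys.group_by { |k| k[:llm_type] }.map do |llm_type, keys|
  {
    llm_type: llm_type,
    api_keys: keys.map { |k| k.slice(:uuid, :description, :available_models) }
  }
end
```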
.available_llm_options(jwt_token) ⇒ Object
Retrieves the LLM options available for user selection (API keys + Ollama). For guest users (no jwt_token), only Ollama is returned.
# File 'lib/llm_meta_client/server_resource.rb', line 15
# (`format_options` and `ollama_options` are reconstructed names for
# private helpers whose identifiers did not survive in this listing.)

def available_llm_options(jwt_token)
  # For guest users, Ollama is required: return only Ollama
  return format_options(ollama_options) if jwt_token.blank?

  # Logged-in user: return API keys + Ollama (if available)
  options = llm_api_keys(jwt_token)

  # Try to add Ollama, but don't fail if it is unavailable
  begin
    options.concat(ollama_options)
  rescue LlmMetaClient::Exceptions::OllamaUnavailableError => e
    Rails.logger.warn "Ollama unavailable: #{e.message}"
    # Continue with API keys only if at least one is available
    raise e if options.empty?
  end

  format_options(options)
end
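The rescue-and-continue behavior of this method (an Ollama failure is tolerated as long as at least one API-key option exists, and re-raised otherwise) can be sketched in isolation. The error class and data below are stand-ins, not the gem's own objects:

```ruby
# Stand-in for LlmMetaClient::Exceptions::OllamaUnavailableError
class OllamaUnavailableError < StandardError; end

# Mirror the degradation rule: swallow the Ollama error when there are
# API-key options to fall back on, re-raise when there are none
def options_with_optional_ollama(api_key_options, ollama_up:)
  options = api_key_options.dup
  begin
    raise OllamaUnavailableError, "connection refused" unless ollama_up
    options << { llm_type: "ollama" }
  rescue OllamaUnavailableError => e
    raise e if options.empty?
  end
  options
end
```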
.fetch_mcp_servers(jwt_token) ⇒ Object
# File 'lib/llm_meta_client/server_resource.rb', line 56

def fetch_mcp_servers(jwt_token)
  return [] if jwt_token.blank?

  response = authenticated_get(jwt_token, "api/mcp_servers")

  if response.success?
    response.parsed_response["mcp_servers"] || []
  else
    Rails.logger.error "Failed to fetch MCP servers: HTTP #{response.code}"
    []
  end
rescue StandardError => e
  Rails.logger.error "Error fetching MCP servers: #{e.class} - #{e.message}"
  []
end
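The response handling above (fall back to an empty array on an HTTP error or a missing key) can be exercised with a stubbed response object. `StubResponse` mimics the `success?`/`parsed_response`/`code` interface the method relies on; it is not the gem's real HTTP client:

```ruby
require "json"

# Minimal stand-in for the HTTP response the method inspects
StubResponse = Struct.new(:code, :body) do
  def success?
    (200..299).cover?(code)
  end

  def parsed_response
    JSON.parse(body)
  end
end

# Mirror the body of fetch_mcp_servers: happy path returns the list,
# anything else degrades to []
def extract_mcp_servers(response)
  return [] unless response.success?
  response.parsed_response["mcp_servers"] || []
end
```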
.fetch_mcp_tools(jwt_token, mcp_server_uuid) ⇒ Object
# File 'lib/llm_meta_client/server_resource.rb', line 72

def fetch_mcp_tools(jwt_token, mcp_server_uuid)
  return [] if jwt_token.blank? || mcp_server_uuid.blank?

  response = authenticated_get(jwt_token, "api/mcp_servers/#{mcp_server_uuid}/tools")

  if response.success?
    response.parsed_response["tools"] || []
  else
    Rails.logger.error "Failed to fetch MCP tools for #{mcp_server_uuid}: HTTP #{response.code}"
    []
  end
rescue StandardError => e
  Rails.logger.error "Error fetching MCP tools: #{e.class} - #{e.message}"
  []
end
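Both MCP fetchers guard on blank inputs before making any request. A standalone sketch of that guard, using a simplified `blank?` since ActiveSupport is not loaded here and a stubbed return value in place of the authenticated HTTP call:

```ruby
# Simplified stand-in for ActiveSupport's blank? (nil or whitespace-only)
def blank?(value)
  value.nil? || (value.respond_to?(:strip) && value.strip.empty?)
end

# Guard clause mirroring fetch_mcp_tools: a blank token or uuid
# short-circuits to an empty list without any network call
def tools_or_empty(jwt_token, mcp_server_uuid)
  return [] if blank?(jwt_token) || blank?(mcp_server_uuid)
  [{ "name" => "example_tool" }] # stand-in for the authenticated HTTP call
end
```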