Class: GeminiAI::Client
Inherits: Object
Defined in: lib/core/client.rb
Overview
Core client class for Gemini AI API communication
Constant Summary

BASE_URL =
  'https://generativelanguage.googleapis.com/v1/models'

MODELS =
  # Model mappings - currently supported models
  {
    # Gemini 2.5 models (latest)
    pro: 'gemini-2.5-pro',
    flash: 'gemini-2.5-flash',
    # Gemini 2.0 models
    flash_2_0: 'gemini-2.0-flash',
    flash_lite: 'gemini-2.0-flash-lite',
    # Legacy aliases for backward compatibility
    pro_2_0: 'gemini-2.0-flash'
  }.freeze

DEPRECATED_MODELS =
  # Deprecated models removed in this version (log a warning and default to :pro)
  {
    pro_1_5: 'gemini-1.5-pro',
    flash_1_5: 'gemini-1.5-flash',
    flash_8b: 'gemini-1.5-flash-8b'
  }.freeze
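The DEPRECATED_MODELS table implies a resolution step that maps a requested model key to a concrete model name, warning and falling back to :pro for removed models. The private helper itself is not shown in this documentation, so the sketch below is an assumption about its behavior, reproducing only the documented mapping tables:

```ruby
# Hypothetical sketch of model resolution; the real resolve_model is a
# private method of GeminiAI::Client and is not shown in this doc.
MODELS = {
  pro: 'gemini-2.5-pro',
  flash: 'gemini-2.5-flash',
  flash_2_0: 'gemini-2.0-flash',
  flash_lite: 'gemini-2.0-flash-lite',
  pro_2_0: 'gemini-2.0-flash'
}.freeze

DEPRECATED_MODELS = {
  pro_1_5: 'gemini-1.5-pro',
  flash_1_5: 'gemini-1.5-flash',
  flash_8b: 'gemini-1.5-flash-8b'
}.freeze

def resolve_model(key)
  if DEPRECATED_MODELS.key?(key)
    # Deprecated keys log a warning and fall back to the default model
    warn "Model #{key} is deprecated; falling back to :pro"
    return MODELS[:pro]
  end
  MODELS.fetch(key, MODELS[:pro])
end

resolve_model(:flash)    # => 'gemini-2.5-flash'
resolve_model(:pro_1_5)  # deprecated: warns, then returns 'gemini-2.5-pro'
```

Falling back to :pro rather than raising keeps existing callers working across version upgrades, at the cost of silently changing the model they get.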
Class Method Summary
-
.logger ⇒ Object
Configure logging.
Instance Method Summary
- #chat(messages, options = {}) ⇒ Object
- #generate_image_text(image_base64, prompt, options = {}) ⇒ Object
- #generate_text(prompt, options = {}) ⇒ Object
-
#initialize(api_key = nil, model: :pro) ⇒ Client
constructor
A new instance of Client.
Constructor Details
#initialize(api_key = nil, model: :pro) ⇒ Client
Returns a new instance of Client.
# File 'lib/core/client.rb', line 51

def initialize(api_key = nil, model: :pro)
  # Prioritize passed API key, then environment variable
  @api_key = api_key || ENV.fetch('GEMINI_API_KEY', nil)

  # Rate limiting - track last request time
  @last_request_time = nil

  # More conservative rate limiting in CI environments
  @min_request_interval = ENV['CI'] == 'true' || ENV['GITHUB_ACTIONS'] == 'true' ? 3.0 : 1.0

  # Extensive logging for debugging
  self.class.logger.debug('Initializing Client')
  self.class.logger.debug("API Key present: #{!@api_key.nil?}")
  self.class.logger.debug("API Key length: #{@api_key&.length}")

  # Validate API key before proceeding
  validate_api_key!

  @model = resolve_model(model)
  self.class.logger.debug("Selected model: #{@model}")
end
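The constructor chooses a minimum interval between requests: 3 seconds when running in CI, 1 second otherwise. The selection logic can be isolated as below; taking the environment as a hash argument is an assumption introduced here for testability (the real code reads ENV directly):

```ruby
# Mirrors the constructor's rate-limit selection. Passing env as a
# hash is a sketch-only convenience; GeminiAI::Client reads ENV.
def min_request_interval(env)
  env['CI'] == 'true' || env['GITHUB_ACTIONS'] == 'true' ? 3.0 : 1.0
end

min_request_interval('CI' => 'true')              # => 3.0
min_request_interval('GITHUB_ACTIONS' => 'true')  # => 3.0
min_request_interval({})                          # => 1.0
```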
Class Method Details
.logger ⇒ Object
Configure logging
# File 'lib/core/client.rb', line 40

def self.logger
  @logger ||= Logger.new($stdout).tap do |log|
    log.level = Logger::DEBUG
    log.formatter = proc do |severity, datetime, _progname, msg|
      # Mask any potential API key in logs
      masked_msg = msg.to_s.gsub(/AIza[a-zA-Z0-9_-]{35,}/, '[REDACTED]')
      "#{datetime}: #{severity} -- #{masked_msg}\n"
    end
  end
end
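The formatter masks anything that looks like a Google API key (they begin with "AIza" followed by a long run of key characters). The same regex can be demonstrated standalone, with a placeholder key built from repeated characters rather than a real credential:

```ruby
# The masking regex from the log formatter above. The key below is a
# fake placeholder of the right shape, not a real credential.
MASK = /AIza[a-zA-Z0-9_-]{35,}/

msg = "request failed for key AIza#{'x' * 35}"
masked = msg.gsub(MASK, '[REDACTED]')
# masked == "request failed for key [REDACTED]"

# Strings too short to be a key are left untouched
'AIzashort'.gsub(MASK, '[REDACTED]')  # => 'AIzashort'
```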
Instance Method Details
#chat(messages, options = {}) ⇒ Object
# File 'lib/core/client.rb', line 111

def chat(messages, options = {})
  request_body = {
    contents: messages.map { |msg| { role: msg[:role], parts: [{ text: msg[:content] }] } },
    generationConfig: build_generation_config(options)
  }

  # Add system instruction if provided
  if options[:system_instruction]
    request_body[:systemInstruction] = {
      parts: [
        { text: options[:system_instruction] }
      ]
    }
  end

  apply_moderation(send_request(request_body), options)
end
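The message mapping inside #chat can be shown without a network call. The sketch below reproduces only the contents transformation; build_generation_config, send_request, and apply_moderation are private to the client and are omitted here:

```ruby
# Each chat message is a hash with :role and :content; #chat maps it
# into the role/parts shape the Gemini generateContent API expects.
messages = [
  { role: 'user', content: 'Hello' },
  { role: 'model', content: 'Hi! How can I help?' },
  { role: 'user', content: 'Tell me a joke.' }
]

contents = messages.map do |msg|
  { role: msg[:role], parts: [{ text: msg[:content] }] }
end

contents.first
# => { role: 'user', parts: [{ text: 'Hello' }] }
```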
#generate_image_text(image_base64, prompt, options = {}) ⇒ Object
# File 'lib/core/client.rb', line 94

def generate_image_text(image_base64, prompt, options = {})
  raise Error, 'Image is required' if image_base64.nil? || image_base64.empty?

  request_body = {
    contents: [
      {
        parts: [
          { inline_data: { mime_type: 'image/jpeg', data: image_base64 } },
          { text: prompt }
        ]
      }
    ],
    generationConfig: build_generation_config(options)
  }

  # Use the pro model for image-to-text tasks
  apply_moderation(send_request(request_body, model: :pro), options)
end
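The method takes the image as a base64 string and pairs it with the prompt in a single parts array. A sketch of preparing that payload, using placeholder bytes in place of a real JPEG:

```ruby
require 'base64'

# Sketch of the inline_data part #generate_image_text builds.
# 'fake-jpeg-bytes' stands in for actual image data.
image_base64 = Base64.strict_encode64('fake-jpeg-bytes')

image_part  = { inline_data: { mime_type: 'image/jpeg', data: image_base64 } }
prompt_part = { text: 'Describe this image' }

contents = [{ parts: [image_part, prompt_part] }]
```

Note the image part comes first and the prompt second, matching the order in the method body above.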
#generate_text(prompt, options = {}) ⇒ Object
# File 'lib/core/client.rb', line 73

def generate_text(prompt, options = {})
  validate_prompt!(prompt)

  request_body = {
    contents: [{ parts: [{ text: prompt }] }],
    generationConfig: build_generation_config(options)
  }

  # Add safety settings if provided
  if options[:safety_settings]
    request_body[:safetySettings] = options[:safety_settings].map do |setting|
      { category: setting[:category], threshold: setting[:threshold] }
    end
  end

  apply_moderation(send_request(request_body), options)
end
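The safetySettings mapping above can be exercised in isolation. The category and threshold strings below are examples of the values the public Gemini API accepts; the client passes them through without validation:

```ruby
# Reproduces the safetySettings mapping from #generate_text: each
# caller-supplied setting is narrowed to just category and threshold.
safety_settings = [
  { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE', note: 'extra key' },
  { category: 'HARM_CATEGORY_HATE_SPEECH', threshold: 'BLOCK_ONLY_HIGH' }
]

safety = safety_settings.map do |setting|
  { category: setting[:category], threshold: setting[:threshold] }
end

safety.first
# => { category: 'HARM_CATEGORY_HARASSMENT', threshold: 'BLOCK_MEDIUM_AND_ABOVE' }
```

Mapping to a fresh hash (rather than passing settings through as-is) drops any stray keys, like :note above, so only the two fields the API expects are sent.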