Class: ActiveCanvas::AiService

Inherits: Object
Defined in: app/services/active_canvas/ai_service.rb
Defined Under Namespace
Classes: ImageValidationError, ScreenshotValidationError
Constant Summary
- ALLOWED_SCREENSHOT_TYPES =
  # Allowed image types for screenshot-to-code
  %w[png jpeg jpg webp gif].freeze
- IMAGE_MAGIC_BYTES =
  # Magic bytes for image type validation
  {
    "png"  => "\x89PNG".b,
    "jpeg" => "\xFF\xD8\xFF".b,
    "jpg"  => "\xFF\xD8\xFF".b,
    "webp" => "RIFF".b,
    "gif"  => "GIF8".b
  }.freeze
- MAX_IMAGE_DOWNLOAD_SIZE =
  # Maximum download size for AI-generated images
  10.megabytes
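The IMAGE_MAGIC_BYTES constant suggests validation by file-signature prefix. A minimal sketch of how such a check might look, assuming a hypothetical `valid_image?` helper (the service's actual validation method is not shown in this doc):

```ruby
# Constants mirror the ones documented above.
IMAGE_MAGIC_BYTES = {
  "png"  => "\x89PNG".b,
  "jpeg" => "\xFF\xD8\xFF".b,
  "jpg"  => "\xFF\xD8\xFF".b,
  "webp" => "RIFF".b,
  "gif"  => "GIF8".b
}.freeze

# Hypothetical helper: true only when the binary data starts with the
# magic bytes registered for the claimed type.
def valid_image?(data, type)
  magic = IMAGE_MAGIC_BYTES[type]
  return false unless magic

  data.b.start_with?(magic)
end

valid_image?("\x89PNG\r\n\x1A\n".b, "png") # => true
valid_image?("GIF89a".b, "gif")            # => true
valid_image?("not an image", "png")        # => false
```

Checking magic bytes rather than trusting the declared MIME type guards against mislabeled uploads, which is presumably why both ALLOWED_SCREENSHOT_TYPES and IMAGE_MAGIC_BYTES exist.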
Class Method Summary
- .generate_image(prompt:, model: nil) ⇒ Object
- .generate_text(prompt:, model: nil, context: nil, &block) ⇒ Object
- .screenshot_to_code(image_data:, model: nil, additional_prompt: nil) ⇒ Object
Class Method Details
.generate_image(prompt:, model: nil) ⇒ Object
# File 'app/services/active_canvas/ai_service.rb', line 36

def generate_image(prompt:, model: nil)
  AiConfiguration.configure_ruby_llm!
  model ||= Setting.ai_default_image_model
  image = RubyLLM.paint(prompt, model: model)
  store_generated_image(image.url, prompt)
end
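store_generated_image is private and its body is not shown here, but MAX_IMAGE_DOWNLOAD_SIZE implies a size check before fetching the generated image's URL. A minimal sketch, assuming a hypothetical `within_download_limit?` guard (not part of the documented API):

```ruby
# 10.megabytes without ActiveSupport, matching the constant above.
MAX_IMAGE_DOWNLOAD_SIZE = 10 * 1024 * 1024

# Hypothetical guard: accept only a known, positive Content-Length
# at or below the cap before downloading.
def within_download_limit?(content_length)
  length = content_length.to_i
  length.positive? && length <= MAX_IMAGE_DOWNLOAD_SIZE
end

within_download_limit?(5 * 1024 * 1024)  # => true
within_download_limit?(20 * 1024 * 1024) # => false
```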
.generate_text(prompt:, model: nil, context: nil, &block) ⇒ Object
# File 'app/services/active_canvas/ai_service.rb', line 22

def generate_text(prompt:, model: nil, context: nil, &block)
  AiConfiguration.configure_ruby_llm!
  model ||= Setting.ai_default_text_model
  chat = RubyLLM.chat(model: model)
  chat.with_instructions(build_system_prompt(context)) if context.present?
  if block_given?
    chat.ask(prompt, &block)
  else
    chat.ask(prompt)
  end
end
.screenshot_to_code(image_data:, model: nil, additional_prompt: nil) ⇒ Object
# File 'app/services/active_canvas/ai_service.rb', line 44

def screenshot_to_code(image_data:, model: nil, additional_prompt: nil)
  AiConfiguration.configure_ruby_llm!
  model ||= Setting.ai_default_vision_model
  framework = Setting.css_framework
  prompt = build_screenshot_prompt(framework, additional_prompt)

  # RubyLLM expects a file path, not a data URL
  # Save base64 image to a temp file
  tempfile = save_base64_to_tempfile(image_data)
  begin
    chat = RubyLLM.chat(model: model)
    response = chat.ask(prompt, with: { image: tempfile.path })
    extract_html(response.content)
  ensure
    tempfile.close
    tempfile.unlink
  end
end
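save_base64_to_tempfile is a private helper whose body is not shown in this doc. A minimal sketch of what that step might look like, assuming image_data arrives as a data URL ("data:image/png;base64,..."):

```ruby
require "base64"
require "tempfile"

# Hypothetical implementation: strip the data-URL prefix, decode the
# base64 payload, and write it to a binary-mode tempfile so RubyLLM
# can be handed a file path.
def save_base64_to_tempfile(image_data)
  base64 = image_data.sub(%r{\Adata:image/[a-z]+;base64,}, "")
  tempfile = Tempfile.new(["screenshot", ".png"])
  tempfile.binmode
  tempfile.write(Base64.decode64(base64))
  tempfile.rewind
  tempfile
end

data_url = "data:image/png;base64,#{Base64.strict_encode64("\x89PNG".b)}"
file = save_base64_to_tempfile(data_url)
File.binread(file.path) # => "\x89PNG"
file.close
file.unlink
```

The begin/ensure in screenshot_to_code then guarantees the tempfile is closed and unlinked even when the chat request raises.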