Module: Legion::LLM::Prompt
- Defined in:
- lib/legion/llm/prompt.rb
Class Method Summary
- .decide(question, options:, tools: []) ⇒ Object
  Pick from a set of options with reasoning.
- .dispatch(message, intent: nil, tier: nil, exclude: {}, provider: nil, model: nil, schema: nil, tools: nil, escalate: nil, max_escalations: 3, thinking: nil, temperature: nil, max_tokens: nil, tracing: nil, agent: nil, caller: nil, cache: nil, quality_check: nil) ⇒ Object
  Auto-routed: Router picks the best provider+model based on intent.
- .extract(text, schema:, tools: []) ⇒ Object
  Extract structured data from unstructured text.
- .request(message, provider:, model:, intent: nil, tier: nil, schema: nil, tools: nil, escalate: nil, max_escalations: 3, thinking: nil, temperature: nil, max_tokens: nil, tracing: nil, agent: nil, caller: nil, cache: nil, quality_check: nil) ⇒ Object
  Pinned: caller specifies exact provider+model.
- .summarize(messages, tools: []) ⇒ Object
  Condense a conversation or feedback history into a shorter form.
Class Method Details
.decide(question, options:, tools: []) ⇒ Object
Pick from a set of options with reasoning.
# File 'lib/legion/llm/prompt.rb', line 112

def decide(question, options:, tools: [], **)
  prompt = build_decide_prompt(question, options)
  dispatch(prompt, tools: tools, **)
end
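The convenience methods (decide, extract, summarize) all follow the same shape: build a task-specific prompt string, then delegate to dispatch. The private build_decide_prompt helper is not shown on this page, so the formatting below is a hypothetical stand-in used only to illustrate the pattern:

```ruby
# Sketch of the build-prompt-then-delegate pattern behind .decide.
# NOTE: the real build_decide_prompt is private and not documented here;
# this question/options layout is an assumption, not the library's output.
def build_decide_prompt(question, options)
  <<~PROMPT
    #{question}

    Choose exactly one of the following options and explain your reasoning:
    #{options.map { |opt| "- #{opt}" }.join("\n")}
  PROMPT
end

prompt = build_decide_prompt('Which cache strategy fits?', %w[LRU LFU FIFO])
puts prompt
```

Whatever the real formatting is, the key point from the source above is that decide adds no routing logic of its own; the built prompt and any tools are handed straight to dispatch.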
.dispatch(message, intent: nil, tier: nil, exclude: {}, provider: nil, model: nil, schema: nil, tools: nil, escalate: nil, max_escalations: 3, thinking: nil, temperature: nil, max_tokens: nil, tracing: nil, agent: nil, caller: nil, cache: nil, quality_check: nil) ⇒ Object
Auto-routed: Router picks the best provider+model based on intent. Primary entry point for most LLM calls. When provider/model are passed explicitly, they take precedence over routing.
# File 'lib/legion/llm/prompt.rb', line 11

def dispatch(message, # rubocop:disable Metrics/ParameterLists
             intent: nil, tier: nil, exclude: {},
             provider: nil, model: nil,
             schema: nil, tools: nil,
             escalate: nil, max_escalations: 3,
             thinking: nil, temperature: nil, max_tokens: nil,
             tracing: nil, agent: nil, caller: nil,
             cache: nil, quality_check: nil, **)
  resolved_provider = provider
  resolved_model = model

  if resolved_provider.nil? && resolved_model.nil? &&
     defined?(Router) && Router.routing_enabled? && (intent || tier)
    resolution = Router.resolve(intent: intent, tier: tier, exclude: exclude)
    resolved_provider = resolution&.provider
    resolved_model = resolution&.model
  end

  resolved_provider ||= Legion::LLM.settings[:default_provider]
  resolved_model ||= Legion::LLM.settings[:default_model]

  request(message,
          provider: resolved_provider, model: resolved_model,
          intent: intent, tier: tier,
          schema: schema, tools: tools,
          escalate: escalate, max_escalations: max_escalations,
          thinking: thinking, temperature: temperature, max_tokens: max_tokens,
          tracing: tracing, agent: agent, caller: caller,
          cache: cache, quality_check: quality_check, **)
end
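The heart of dispatch is its three-level resolution order: an explicitly passed provider/model wins, the Router is consulted only when neither is given and an intent or tier is present, and configured defaults fill whatever is still nil. That precedence can be sketched as a standalone function; the names below (resolve_route, DEFAULTS, the sample providers and models) are illustrative only, not the library's API:

```ruby
# Standalone sketch of dispatch's routing precedence:
# explicit pin > Router resolution (needs intent or tier) > defaults.
DEFAULTS = { provider: :openai, model: 'gpt-4o-mini' }.freeze

def resolve_route(provider: nil, model: nil, intent: nil, tier: nil, router: nil)
  # Router runs only when the caller pinned nothing and gave a routing hint.
  if provider.nil? && model.nil? && router && (intent || tier)
    routed = router.call(intent: intent, tier: tier)
    provider = routed[:provider]
    model = routed[:model]
  end
  # Defaults backfill whichever values are still unset.
  { provider: provider || DEFAULTS[:provider], model: model || DEFAULTS[:model] }
end

cheap_router = ->(intent:, tier:) { { provider: :anthropic, model: 'claude-haiku' } }

resolve_route(provider: :groq, model: 'llama-3')         # explicit pin wins
resolve_route(intent: :summarize, router: cheap_router)  # router picks
resolve_route                                            # falls back to defaults
```

Note one subtlety visible in the source: passing only a provider (or only a model) also bypasses the Router entirely, and the missing half is taken from the defaults.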
.extract(text, schema:, tools: []) ⇒ Object
Extract structured data from unstructured text.
# File 'lib/legion/llm/prompt.rb', line 106

def extract(text, schema:, tools: [], **)
  prompt = build_extract_prompt(text)
  dispatch(prompt, schema: schema, tools: tools, **)
end
.request(message, provider:, model:, intent: nil, tier: nil, schema: nil, tools: nil, escalate: nil, max_escalations: 3, thinking: nil, temperature: nil, max_tokens: nil, tracing: nil, agent: nil, caller: nil, cache: nil, quality_check: nil) ⇒ Object
Pinned: caller specifies exact provider+model. Full pipeline runs in-process.
# File 'lib/legion/llm/prompt.rb', line 63

def request(message, # rubocop:disable Metrics/ParameterLists
            provider:, model:,
            intent: nil, tier: nil,
            schema: nil, tools: nil,
            escalate: nil, max_escalations: 3,
            thinking: nil, temperature: nil, max_tokens: nil,
            tracing: nil, agent: nil, caller: nil,
            cache: nil, quality_check: nil, **)
  if provider.nil? || model.nil?
    raise LLMError,
          "Prompt.request: provider and model must be set (got provider=#{provider.inspect}, model=#{model.inspect}). " \
          'Configure Legion::Settings[:llm][:default_provider] and [:default_model], or pass them explicitly.'
  end

  pipeline_request = build_pipeline_request(
    message,
    provider: provider, model: model,
    intent: intent, tier: tier,
    schema: schema, tools: tools,
    escalate: escalate, max_escalations: max_escalations,
    thinking: thinking, temperature: temperature, max_tokens: max_tokens,
    tracing: tracing, agent: agent, caller: caller,
    cache: cache, quality_check: quality_check,
    **
  )

  executor = Pipeline::Executor.new(pipeline_request)
  executor.call
end
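Unlike dispatch, request never falls back to routing: a missing provider or model is an immediate error rather than a silent default. A minimal sketch of that fail-fast guard, with a local LLMError stand-in for the library's error class and a hypothetical check_pinned! helper:

```ruby
# Sketch of request's fail-fast guard. LLMError and check_pinned! are
# local stand-ins used for illustration, not the library's own names.
class LLMError < StandardError; end

def check_pinned!(provider, model)
  return if provider && model

  # Echo back the nil value so the caller can see which argument was missing.
  raise LLMError,
        "provider and model must be set (got provider=#{provider.inspect}, " \
        "model=#{model.inspect})"
end

check_pinned!(:openai, 'gpt-4o')  # passes silently
begin
  check_pinned!(nil, 'gpt-4o')
rescue LLMError => e
  puts e.message                  # message names the missing argument
end
```

Interpolating provider.inspect and model.inspect (rather than plain to_s) is what makes the real error message distinguish nil from an empty string, which matters when defaults are misconfigured.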
.summarize(messages, tools: []) ⇒ Object
Condense a conversation or feedback history into a shorter form.
# File 'lib/legion/llm/prompt.rb', line 100

def summarize(messages, tools: [], **)
  prompt = build_summarize_prompt(messages)
  dispatch(prompt, tools: tools, **)
end