Class: Inquirex::LLM::Node
- Inherits: Node
  - Object
  - Node
  - Inquirex::LLM::Node
- Defined in: lib/inquirex/llm/node.rb
Overview
Enriched node for LLM-powered steps. Extends Inquirex::Node with attributes needed by the server-side LLM adapter: prompt template, output schema, source step references, and model configuration.
LLM verbs:
- :clarify — extract structured data from a free-text answer
- :describe — generate natural-language text from structured data
- :summarize — produce a summary of all or selected answers
- :detour — dynamically generate follow-up questions based on an answer
All LLM nodes are collecting (they produce answers) and require server round-trips. The frontend shows a “thinking” state while the server processes.
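To make the "server round-trip" concrete, here is an illustrative sketch of the wire shape of a serialized :clarify node, inferred from the #to_h format this class documents (nested "llm" key, requires_server flag). All field values here are hypothetical.

```ruby
# Hypothetical serialized :clarify node (values invented for illustration).
node_hash = {
  "verb"            => "clarify",
  "question"        => "Tell us what happened in your own words.",
  "requires_server" => true,
  "llm" => {
    "prompt"     => "Extract structured facts from the user's answer.",
    "schema"     => { "type" => "object" },
    "from_steps" => ["freeform_intro"]
  }
}

node_hash["requires_server"]       # => true
node_hash.dig("llm", "from_steps") # => ["freeform_intro"]
```

The frontend can key its "thinking" state off requires_server without inspecting the verb.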
Constant Summary
- LLM_VERBS = %i[clarify describe summarize detour].freeze
Instance Attribute Summary
- #fallback ⇒ Proc? (readonly): Server-side fallback (stripped from JSON).
- #from_all ⇒ Boolean (readonly): Whether to pass all collected answers to the LLM.
- #from_steps ⇒ Array<Symbol> (readonly): Source step ids whose answers feed the LLM.
- #max_tokens ⇒ Integer? (readonly): Optional max output tokens.
- #model ⇒ Symbol? (readonly): Optional model hint (e.g. :claude_sonnet).
- #prompt ⇒ String (readonly): LLM prompt template.
- #schema ⇒ Schema? (readonly): Expected output structure (required for clarify/detour).
- #temperature ⇒ Float? (readonly): Optional sampling temperature.
Class Method Summary
- .from_h(id, hash) ⇒ LLM::Node: Deserializes from a plain Hash (string or symbol keys).
- .llm_verb?(verb) ⇒ Boolean: Whether the given verb is a recognized LLM verb.
Instance Method Summary
- #collecting? ⇒ Boolean: LLM verbs always collect output (the LLM provides the “answer”).
- #display? ⇒ Boolean: LLM verbs are never display-only.
- #initialize(prompt:, schema: nil, from_steps: [], from_all: false, model: nil, temperature: nil, max_tokens: nil, fallback: nil) ⇒ Node (constructor): A new instance of Node.
- #llm_verb? ⇒ Boolean: Whether this is an LLM-powered step requiring server processing.
- #to_h ⇒ Hash: Serializes to a plain Hash.
Constructor Details
#initialize(prompt:, schema: nil, from_steps: [], from_all: false, model: nil, temperature: nil, max_tokens: nil, fallback: nil) ⇒ Node
Returns a new instance of Node.
# File 'lib/inquirex/llm/node.rb', line 38

def initialize(prompt:, schema: nil, from_steps: [], from_all: false,
               model: nil, temperature: nil, max_tokens: nil, fallback: nil, **)
  @prompt      = prompt
  @schema      = schema
  @from_steps  = Array(from_steps).map(&:to_sym).freeze
  @from_all    = !!from_all
  @model       = model&.to_sym
  @temperature = temperature&.to_f
  @max_tokens  = max_tokens&.to_i
  @fallback    = fallback
  super(**)
end
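The constructor normalizes its inputs defensively. The same coercion patterns can be seen in isolation as plain Ruby, with no Inquirex code required (values here are invented for illustration):

```ruby
# Stand-alone illustration of the input coercions used in #initialize.

# Kernel#Array normalizes scalars, arrays, and nil alike,
# so from_steps: :age and from_steps: [:age] behave identically.
from_steps = Array(:age).map(&:to_sym).freeze
Array(nil)       # => []
Array([:a, :b])  # => [:a, :b]

# Double negation forces a strict true/false out of any truthy/falsy value.
from_all = !!nil  # => false

# Safe navigation (&.) skips the call when the receiver is nil,
# so optional attributes stay nil instead of raising NoMethodError.
model = nil
model&.to_sym     # => nil

temperature = 0.7
temperature&.to_f # => 0.7
```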
Instance Attribute Details
#fallback ⇒ Proc? (readonly)
Server-side fallback (stripped from JSON).

# File 'lib/inquirex/llm/node.rb', line 26

def fallback
  @fallback
end
#from_all ⇒ Boolean (readonly)
Whether to pass all collected answers to the LLM.

# File 'lib/inquirex/llm/node.rb', line 26

def from_all
  @from_all
end
#from_steps ⇒ Array<Symbol> (readonly)
Source step ids whose answers feed the LLM.

# File 'lib/inquirex/llm/node.rb', line 26

def from_steps
  @from_steps
end
#max_tokens ⇒ Integer? (readonly)
Optional max output tokens.

# File 'lib/inquirex/llm/node.rb', line 26

def max_tokens
  @max_tokens
end
#model ⇒ Symbol? (readonly)
Optional model hint (e.g. :claude_sonnet).

# File 'lib/inquirex/llm/node.rb', line 26

def model
  @model
end
#prompt ⇒ String (readonly)
LLM prompt template.

# File 'lib/inquirex/llm/node.rb', line 26

def prompt
  @prompt
end
#schema ⇒ Schema? (readonly)
Expected output structure (required for clarify/detour).

# File 'lib/inquirex/llm/node.rb', line 26

def schema
  @schema
end
#temperature ⇒ Float? (readonly)
Optional sampling temperature.

# File 'lib/inquirex/llm/node.rb', line 26

def temperature
  @temperature
end
Class Method Details
.from_h(id, hash) ⇒ LLM::Node
Deserializes from a plain Hash (string or symbol keys).
# File 'lib/inquirex/llm/node.rb', line 91

def self.from_h(id, hash)
  verb             = hash["verb"] || hash[:verb]
  question         = hash["question"] || hash[:question]
  text             = hash["text"] || hash[:text]
  transitions_data = hash["transitions"] || hash[:transitions] || []
  skip_if_data     = hash["skip_if"] || hash[:skip_if]
  llm_data         = hash["llm"] || hash[:llm] || {}

  transitions = transitions_data.map { |t| Inquirex::Transition.from_h(t) }
  skip_if     = skip_if_data ? Inquirex::Rules::Base.from_h(skip_if_data) : nil

  prompt     = llm_data["prompt"] || llm_data[:prompt]
  schema_raw = llm_data["schema"] || llm_data[:schema]
  from_raw   = llm_data["from_steps"] || llm_data[:from_steps] || []
  from_all   = llm_data["from_all"] || llm_data[:from_all] || false
  model      = llm_data["model"] || llm_data[:model]
  temp       = llm_data["temperature"] || llm_data[:temperature]
  max_tok    = llm_data["max_tokens"] || llm_data[:max_tokens]

  schema     = schema_raw ? Schema.from_h(schema_raw) : nil
  from_steps = from_raw.map(&:to_sym)

  new(
    id:, verb:, question:, text:, transitions:, skip_if:,
    prompt:, schema:, from_steps:, from_all:,
    model:, temperature: temp, max_tokens: max_tok
  )
end
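.from_h tolerates string or symbol keys by trying both lookups. A minimal sketch of that pattern, extracted into a hypothetical helper (fetch_key is not part of Inquirex):

```ruby
# The string-or-symbol key lookup used throughout .from_h, in isolation.
# fetch_key is a hypothetical helper for demonstration only.
def fetch_key(hash, key, default = nil)
  hash[key.to_s] || hash[key.to_sym] || default
end

fetch_key({ "verb" => "clarify" }, :verb)  # => "clarify"
fetch_key({ verb: "detour" }, :verb)       # => "detour"
fetch_key({}, :from_steps, [])             # => []
```

Note that `||` treats an explicitly stored false the same as a missing key; that is harmless here because the only boolean field, from_all, defaults to false anyway.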
.llm_verb?(verb) ⇒ Boolean
Whether this verb is a recognized LLM verb.
# File 'lib/inquirex/llm/node.rb', line 131

def self.llm_verb?(verb)
  LLM_VERBS.include?(verb.to_sym)
end
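The check is a plain membership test against LLM_VERBS; it can be reproduced stand-alone:

```ruby
# Stand-alone replica of the .llm_verb? membership test.
LLM_VERBS = %i[clarify describe summarize detour].freeze

def llm_verb?(verb)
  LLM_VERBS.include?(verb.to_sym)
end

llm_verb?(:clarify)     # => true
llm_verb?("summarize")  # => true  (strings are symbolized first)
llm_verb?(:branch)      # => false
```

Because the verb is symbolized with to_sym, passing nil raises NoMethodError; callers are expected to supply a String or Symbol.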
Instance Method Details
#collecting? ⇒ Boolean
LLM verbs always collect output (the LLM provides the “answer”).
# File 'lib/inquirex/llm/node.rb', line 53

def collecting? = true
#display? ⇒ Boolean
LLM verbs are never display-only.
# File 'lib/inquirex/llm/node.rb', line 56

def display? = false
#llm_verb? ⇒ Boolean
Whether this is an LLM-powered step requiring server processing.
# File 'lib/inquirex/llm/node.rb', line 59

def llm_verb? = true
#to_h ⇒ Hash
Serializes to a plain Hash. LLM metadata is nested under “llm”. Fallback procs are stripped (server-side only). All transitions are marked requires_server: true.
# File 'lib/inquirex/llm/node.rb', line 66

def to_h
  hash = { "verb" => @verb.to_s }
  hash["question"]    = @question if @question
  hash["text"]        = @text if @text
  hash["transitions"] = @transitions.map(&:to_h) unless @transitions.empty?
  hash["skip_if"]     = @skip_if.to_h if @skip_if
  hash["requires_server"] = true

  llm_hash = { "prompt" => @prompt }
  llm_hash["schema"]      = @schema.to_h if @schema
  llm_hash["from_steps"]  = @from_steps.map(&:to_s) unless @from_steps.empty?
  llm_hash["from_all"]    = true if @from_all
  llm_hash["model"]       = @model.to_s if @model
  llm_hash["temperature"] = @temperature if @temperature
  llm_hash["max_tokens"]  = @max_tokens if @max_tokens

  hash["llm"] = llm_hash
  hash
end
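#to_h keeps the serialized hash compact by writing a key only when the value is present. The guard pattern in isolation, with hypothetical values:

```ruby
# Stand-alone sketch of the conditional-serialization guards in #to_h.
question    = nil   # unset optional field
transitions = []    # empty collection
from_all    = true

hash = { "verb" => "summarize", "requires_server" => true }
hash["question"]    = question if question                      # skipped: nil
hash["transitions"] = transitions unless transitions.empty?     # skipped: empty

llm_hash = { "prompt" => "Summarize the session." }
llm_hash["from_all"] = true if from_all
hash["llm"] = llm_hash

hash.keys  # => ["verb", "requires_server", "llm"]
```

Only meaningful fields reach the JSON payload, which is why .from_h supplies defaults ([] and false) for the keys #to_h may have omitted.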