Class: Leann::Embedding::Ollama
Defined in: lib/leann/embedding/ollama.rb
Overview
Ollama Embeddings API provider
Uses a local Ollama server to compute embeddings. Requires a running Ollama instance (see ollama.com).
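As a rough sketch of what this provider does under the hood, the snippet below builds the JSON payload for Ollama's `/api/embed` endpoint (the path stored in `EMBED_PATH`). The payload shape — a `model` name plus an `input` array — follows Ollama's documented embeddings API; the host and texts here are placeholders.

```ruby
require "json"
require "net/http"

# Resolve the server the same way the provider does: OLLAMA_HOST or default.
host = ENV.fetch("OLLAMA_HOST", "http://localhost:11434")
uri = URI.join(host, "/api/embed")

# Ollama's /api/embed accepts a model name and an "input" that may be a
# single string or an array of strings to embed in one request.
payload = {
  model: "nomic-embed-text",
  input: ["first chunk of text", "second chunk of text"]
}
body = JSON.generate(payload)

# A real call (requires a running Ollama server) would be:
#   Net::HTTP.post(uri, body, "Content-Type" => "application/json")
# The response JSON carries an "embeddings" array of float vectors.
puts body
```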
Constant Summary
- DEFAULT_HOST = "http://localhost:11434"
- EMBED_PATH = "/api/embed"
- MAX_BATCH_SIZE = 32
- TIMEOUT = 60
- POPULAR_MODELS = %w[nomic-embed-text mxbai-embed-large bge-m3 all-minilm snowflake-arctic-embed].freeze — Popular embedding models
Instance Attribute Summary
Attributes inherited from Base
Instance Method Summary
- #compute(texts) ⇒ Array<Array<Float>>
  Compute embeddings for texts.
- #initialize(model: "nomic-embed-text", host: nil) ⇒ Ollama (constructor)
  A new instance of Ollama.
Methods inherited from Base
Constructor Details
#initialize(model: "nomic-embed-text", host: nil) ⇒ Ollama
Returns a new instance of Ollama.
# File 'lib/leann/embedding/ollama.rb', line 36

def initialize(model: "nomic-embed-text", host: nil)
  super(model: model)
  @host = host || Leann.configuration.ollama_host || ENV["OLLAMA_HOST"] || DEFAULT_HOST
  @dimensions = nil
  check_connection!
end
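The constructor resolves the server address through a fallback chain: an explicit `host:` argument wins, then the configured `ollama_host`, then the `OLLAMA_HOST` environment variable, then `DEFAULT_HOST`. A minimal sketch of that precedence (the `resolve_host` helper is illustrative, not part of the library):

```ruby
DEFAULT_HOST = "http://localhost:11434"

# Mirrors the `||` chain in #initialize: first non-nil value wins.
def resolve_host(explicit, configured, env)
  explicit || configured || env || DEFAULT_HOST
end

resolve_host(nil, nil, nil)                     # falls through to DEFAULT_HOST
resolve_host(nil, nil, "http://gpu-box:11434")  # env var beats the default
resolve_host("http://127.0.0.1:9999", nil, "http://gpu-box:11434") # explicit wins
```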
Instance Method Details
#compute(texts) ⇒ Array<Array<Float>>
Compute embeddings for texts
# File 'lib/leann/embedding/ollama.rb', line 49

def compute(texts)
  return [] if texts.empty?

  embeddings = []
  in_batches(texts, MAX_BATCH_SIZE) do |batch|
    batch_embeddings = compute_batch(batch)
    embeddings.concat(batch_embeddings)
    print "." # Progress indicator
  end
  puts " Done! (#{embeddings.size} embeddings)" unless texts.size < MAX_BATCH_SIZE

  # Normalize embeddings (Ollama may not normalize by default)
  embeddings.map { |emb| normalize(emb) }
end
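The `in_batches` and `normalize` helpers called above are private and not shown on this page. A plausible sketch of both, under the assumption that batching is a plain `each_slice` and normalization is standard L2 scaling (which makes cosine similarity reducible to a dot product) — these bodies are illustrative, not the library's source:

```ruby
# Yield texts in slices of at most `size` elements.
def in_batches(texts, size)
  texts.each_slice(size) { |batch| yield batch }
end

# Scale an embedding to unit L2 length; leave all-zero vectors untouched.
def normalize(embedding)
  norm = Math.sqrt(embedding.sum { |x| x * x })
  return embedding if norm.zero?
  embedding.map { |x| x / norm }
end

normalize([3.0, 4.0]) # => [0.6, 0.8]
```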