Class: Leann::Embedding::Ollama

Inherits:
Base
  • Object
Defined in:
lib/leann/embedding/ollama.rb

Overview

Ollama Embeddings API provider.

Computes embeddings via a local Ollama server. Requires Ollama to be installed and running (ollama.com).

Examples:

provider = Leann::Embedding::Ollama.new(model: "nomic-embed-text")
embeddings = provider.compute(["Hello", "World"])

Constant Summary

DEFAULT_HOST =
"http://localhost:11434"
EMBED_PATH =
"/api/embed"
MAX_BATCH_SIZE =
32
TIMEOUT =
60
%w[
  nomic-embed-text
  mxbai-embed-large
  bge-m3
  all-minilm
  snowflake-arctic-embed
].freeze
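The constants above describe the wire protocol: batches of up to MAX_BATCH_SIZE texts are POSTed to EMBED_PATH on the Ollama host. As a rough illustration, a single batch request might look like the sketch below. The helper names (`embed_request_body`, `post_embed`) are hypothetical; the gem's own batching method is private and may be structured differently.

```ruby
require "json"
require "net/http"
require "uri"

# Build the JSON payload Ollama's /api/embed endpoint expects:
# a model name plus an array of input strings.
def embed_request_body(model, texts)
  JSON.generate(model: model, input: texts)
end

# POST one batch and return the array of embedding vectors.
# Assumes a running Ollama server; names here are illustrative.
def post_embed(texts, model: "nomic-embed-text", host: "http://localhost:11434")
  uri = URI.join(host, "/api/embed")
  http = Net::HTTP.new(uri.host, uri.port)
  http.read_timeout = 60 # mirrors TIMEOUT above

  request = Net::HTTP::Post.new(uri.path, "Content-Type" => "application/json")
  request.body = embed_request_body(model, texts)

  JSON.parse(http.request(request).body).fetch("embeddings")
end
```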

Instance Attribute Summary

Attributes inherited from Base

#dimensions, #model

Instance Method Summary

Methods inherited from Base

#compute_one

Constructor Details

#initialize(model: "nomic-embed-text", host: nil) ⇒ Ollama

Returns a new instance of Ollama.

Parameters:

  • model (String) (defaults to: "nomic-embed-text")

    Ollama embedding model name

  • host (String, nil) (defaults to: nil)

    Ollama server URL



# File 'lib/leann/embedding/ollama.rb', line 36

def initialize(model: "nomic-embed-text", host: nil)
  super(model: model)

  @host = host || Leann.configuration.ollama_host || ENV["OLLAMA_HOST"] || DEFAULT_HOST
  @dimensions = nil

  check_connection!
end
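The constructor resolves the server URL through a precedence chain: the explicit `host:` argument wins, then `Leann.configuration.ollama_host`, then the `OLLAMA_HOST` environment variable, then DEFAULT_HOST. A standalone sketch of that chain (the `resolve_host` helper is illustrative, not part of the gem):

```ruby
DEFAULT_HOST = "http://localhost:11434"

# Mirrors the fallback order used in #initialize:
# explicit argument > configured value > environment variable > default.
def resolve_host(explicit: nil, configured: nil, env: ENV["OLLAMA_HOST"])
  explicit || configured || env || DEFAULT_HOST
end
```

So `Ollama.new(model: "nomic-embed-text", host: "http://gpu-box:11434")` always targets `gpu-box`, regardless of configuration or environment.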

Instance Method Details

#compute(texts) ⇒ Array<Array<Float>>

Compute embeddings for texts

Parameters:

  • texts (Array<String>)

Returns:

  • (Array<Array<Float>>)


# File 'lib/leann/embedding/ollama.rb', line 49

def compute(texts)
  return [] if texts.empty?

  all_embeddings = []

  in_batches(texts, MAX_BATCH_SIZE) do |batch|
    batch_embeddings = compute_batch(batch)
    all_embeddings.concat(batch_embeddings)
    print "." # Progress indicator
  end

  puts " Done! (#{all_embeddings.size} embeddings)" unless texts.size < MAX_BATCH_SIZE

  # Normalize embeddings (Ollama may not normalize by default)
  all_embeddings.map { |emb| normalize(emb) }
end
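The method leans on two helpers inherited from Base: `in_batches`, which slices the input into MAX_BATCH_SIZE chunks, and `normalize`, which the comment suggests rescales each vector since Ollama may not return unit-length embeddings. A plausible sketch of both, assuming `normalize` performs L2 normalization (the gem's actual Base implementation may differ):

```ruby
# Yield items in fixed-size slices, as compute does with MAX_BATCH_SIZE.
def in_batches(items, size)
  items.each_slice(size) { |batch| yield batch }
end

# Scale a vector to unit L2 length; zero vectors pass through unchanged.
def normalize(embedding)
  norm = Math.sqrt(embedding.sum { |x| x * x })
  return embedding if norm.zero?
  embedding.map { |x| x / norm }
end
```

Unit-length vectors make cosine similarity reduce to a plain dot product, which is why normalizing on the way out is a common convention for embedding providers.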