# lex-llm-mlx

LegionIO LLM provider extension for MLX-backed OpenAI-compatible servers on Apple Silicon.

This gem lives under `Legion::Extensions::Llm::Mlx` and depends on `lex-llm` for shared provider-neutral routing, fleet, and schema primitives.
## What It Provides

- `Legion::Extensions::Llm::Mlx::Provider`, registered as `:mlx`.
- OpenAI-compatible chat, streaming, model listing, and embeddings endpoint wrappers.
- Local-first defaults for MLX servers running on MacBook, Mac Studio, or local Apple Silicon hosts.
- Shared Legion settings, JSON, and logging dependencies.
## Default Settings

```ruby
Legion::Extensions::Llm::Mlx.default_settings
```
Defaults target `http://localhost:8000`, mark the provider as `:local`, and allow one concurrent local request. Fleet participation stays disabled unless the host opts in through `Legion::Settings`.
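The shape of these defaults can be pictured as follows. Note that the key names below are illustrative assumptions, not the gem's actual settings schema; only the values (local endpoint, `:local` deployment, single concurrent request, fleet disabled) come from the description above.

```ruby
# Hypothetical sketch of the local-first defaults described above.
# Key names are assumptions for illustration, not lex-llm-mlx's real schema.
MLX_DEFAULTS = {
  api_base: 'http://localhost:8000', # stated default endpoint
  deployment: :local,                # provider is marked as local
  concurrency: 1,                    # one concurrent local request
  fleet: { enabled: false }          # fleet participation off unless opted in
}.freeze
```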
## Configuration

```ruby
LexLLM.configure do |config|
  config.mlx_api_base = 'http://localhost:8000'
  config.mlx_api_key  = ENV['MLX_API_KEY']
end
```
`mlx_api_key` is optional because most local MLX servers run without authentication. Set it when a proxy or hosted MLX gateway requires bearer authentication.
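In practice, an OpenAI-compatible client sends the key as a bearer token only when one is configured. The helper below is a sketch of that pattern, not the gem's internal implementation; the method name `mlx_headers` is invented for illustration.

```ruby
# Sketch: build request headers the way an OpenAI-compatible client
# typically does -- attach a bearer token only when a key is present.
# (Illustrative only; not lex-llm-mlx's actual code.)
def mlx_headers(api_key = ENV['MLX_API_KEY'])
  headers = { 'Content-Type' => 'application/json' }
  headers['Authorization'] = "Bearer #{api_key}" if api_key && !api_key.empty?
  headers
end
```

With no key set, only the `Content-Type` header is sent, matching an unauthenticated local MLX server.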
## Endpoint Helpers

- `completion_url` and `stream_url`: `/v1/chat/completions`
- `models_url`: `/v1/models`
- `embedding_url`: `/v1/embeddings`
- `health_url`: `/health`
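Against the default base URL, these helpers resolve roughly as follows. This is a sketch using Ruby's standard `URI` library; the method names mirror the helpers listed above, but the implementation is an assumption, not the gem's code.

```ruby
require 'uri'

# Sketch: join the configured api_base with the endpoint paths above.
# Illustrative implementation; the real helpers live in the provider.
API_BASE = 'http://localhost:8000'

def completion_url
  URI.join(API_BASE, '/v1/chat/completions').to_s
end

def models_url
  URI.join(API_BASE, '/v1/models').to_s
end

def embedding_url
  URI.join(API_BASE, '/v1/embeddings').to_s
end

def health_url
  URI.join(API_BASE, '/health').to_s
end
```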
The provider uses the shared `LexLLM::Provider::OpenAICompatible` adapter, so Legion routing can treat MLX, vLLM, OpenAI, and other compatible servers consistently while preserving provider-specific settings and health behavior.