Coreling
v1.0 — Now available

Local AI, orchestrated.

Run multiple local LLMs in parallel. Coreling divides tasks, manages memory, and unifies your AI workflow — without a single byte leaving your machine.

Download for Mac · Star on GitHub

Compatible with Ollama · LM Studio · llama.cpp


Gemma 3 · Llama 3.2 · Mistral · Phi-3 · Qwen 2.5

Capabilities

Everything local AI was missing.

Orchestration

Task Delegation

Coreling's router analyzes complexity, domain, and compute — then assigns each subtask to the best local model. Zero config.
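The idea can be sketched as a heuristic that scores a subtask and picks a model. The model names come from the compatibility list above, but the scoring rule and thresholds here are illustrative assumptions, not Coreling's actual router:

```python
# Hypothetical routing sketch. A real router would also weigh domain
# and available compute; this one scores only prompt length.

def route(subtask: str) -> str:
    """Assign a subtask to a local model by a crude complexity score."""
    score = len(subtask.split())
    if score < 8:
        return "phi-3"        # small, fast model for short prompts
    if score < 25:
        return "llama-3.2"    # mid-size general model
    return "mistral"          # larger model for long, complex subtasks

print(route("summarize this file"))
```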

Memory

Unified Memory

A persistent local vector store enriches every session. Context survives restarts, model switches, and project changes.
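A persistent vector memory can be sketched in a few lines. The bag-of-words embedding and the single-table schema below are toy assumptions for illustration; Coreling's actual store and embedding model may differ:

```python
# Minimal sketch of a local vector memory backed by SQLite.
# The embed() function is a toy hashing embedding, not a real model.
import json
import math
import sqlite3

def embed(text: str, dim: int = 64) -> list[float]:
    v = [0.0] * dim
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    norm = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / norm for x in v]

def remember(db: sqlite3.Connection, text: str) -> None:
    db.execute("INSERT INTO memory (text, vec) VALUES (?, ?)",
               (text, json.dumps(embed(text))))

def recall(db: sqlite3.Connection, query: str) -> str:
    """Return the stored text most similar to the query (cosine)."""
    qv = embed(query)
    rows = db.execute("SELECT text, vec FROM memory").fetchall()
    best = max(rows, key=lambda r: sum(a * b for a, b
                                       in zip(qv, json.loads(r[1]))))
    return best[0]

db = sqlite3.connect(":memory:")  # on disk in practice, so context persists
db.execute("CREATE TABLE memory (text TEXT, vec TEXT)")
remember(db, "the project uses rust for the backend")
remember(db, "deploy target is macos")
print(recall(db, "rust backend language"))
```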

Privacy

100% Local

Zero telemetry. Zero cloud calls. Every prompt, completion, and memory artifact stays on-device. Fully air-gap compatible.

Multi-model

Model Mesh

Run Gemma 3, Llama 3.2, Mistral, and Phi-3 in parallel as a single unified agent — not isolated chatbots.

Performance

Sub-50ms Routing

The Coreling orchestrator adds less than 50 ms overhead. Multi-agent intelligence at native local inference speed.

Transparent

Plain-text Storage

Task history, memory graphs, and configs stored as SQLite + JSON on disk. Export, inspect, or version-control everything.
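Because the store is plain SQLite, anything can inspect it. The database path and the `tasks` table below are hypothetical stand-ins for whatever schema Coreling actually writes:

```python
# Sketch of inspecting an on-disk store with only the standard library.
# The table name and payload shape are illustrative assumptions.
import json
import os
import sqlite3
import tempfile

def list_tables(path: str) -> list[str]:
    """List the tables in a SQLite file on disk."""
    with sqlite3.connect(path) as db:
        rows = db.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
    return [name for (name,) in rows]

# Build a tiny stand-in database to demonstrate:
path = os.path.join(tempfile.mkdtemp(), "coreling.db")
with sqlite3.connect(path) as db:
    db.execute("CREATE TABLE tasks (id INTEGER, payload TEXT)")
    db.execute("INSERT INTO tasks VALUES (1, ?)",
               (json.dumps({"step": "split"}),))
print(list_tables(path))
```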

Architecture

One input. Many models. One output.

01 · Input: Natural language task

02 · Task Splitter: Decomposes into subtasks

03 · Local LLMs: Best model per subtask

04 · Memory DB: Context written locally

05 · Output: Unified coherent result
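The five stages above can be sketched as a single function. The splitter, model picker, and inference call are placeholders; real task decomposition and local model execution are the parts this sketch omits:

```python
# Toy end-to-end sketch of the input -> split -> route -> memory -> output
# flow. Splitting on " and " and picking models by length are illustrative
# assumptions, not Coreling's actual behavior.

def split(task: str) -> list[str]:            # 02: Task Splitter
    return [s.strip() for s in task.split(" and ")]

def pick_model(subtask: str) -> str:          # 03: best model per subtask
    return "phi-3" if len(subtask.split()) < 5 else "llama-3.2"

memory: list[tuple[str, str]] = []            # 04: stand-in Memory DB

def run(task: str) -> str:                    # 01: input -> 05: output
    results = []
    for sub in split(task):
        model = pick_model(sub)
        result = f"[{model}] {sub}"           # placeholder for local inference
        memory.append((sub, result))          # context written locally
        results.append(result)
    return "\n".join(results)                 # merged into one unified output

print(run("summarize the report and draft a reply"))
```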

Zero cloud. Zero exposure.

No account. No telemetry. No model provider reading your prompts. Coreling was built air-gap-first — your data stays exactly where it belongs.

Air-gap compatible

Open Source

Free forever.

Proudly Open Source

Coreling is and will always be free for everyone. Our mission is to democratize local AI orchestration.

  • Unlimited local runs
  • Multi-model orchestration
  • Private vector memory
  • CLI & REST API
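Talking to the REST API from a script might look like the following. The port, endpoint path, and payload shape are assumptions made for illustration; check the project's documentation for the actual API:

```python
# Builds (but does not send) a request to a hypothetical local endpoint.
# "http://localhost:8080/v1/tasks" and the {"task": ...} payload are
# invented for this sketch.
import json
from urllib.request import Request

def build_task_request(prompt: str) -> Request:
    body = json.dumps({"task": prompt}).encode()
    return Request("http://localhost:8080/v1/tasks",
                   data=body,
                   headers={"Content-Type": "application/json"},
                   method="POST")

req = build_task_request("summarize ./notes.md")
print(req.method, req.full_url)
```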
View on GitHub
If you want to support development, consider buying us a coffee or contributing code.

Your machine.
Your AI stack.

Stop sending your most sensitive work to someone else's server. Multi-agent AI running entirely on your hardware.

Download for Mac · Star on GitHub