v0.5.0 · Open Source · MIT Licensed

Turn idle Macs into an
AI compute fleet

Your spare Mac has 36GB of RAM doing nothing. Fix that: run DeepSeek-R1 70B on the Studio, FLUX image generation on the MacBook, and Qwen3-ASR transcription on the Mini, all through one endpoint.

Get Started → See how it works
# Install (or upgrade to v0.5.0)
pip install ollama-herd --upgrade

# Start the router
herd

# On each device
herd-node

# That's it. Nodes discover the router via mDNS.
× Kubernetes
× Docker
× YAML
× Config files
× Cloud costs
× Manual load balancing
The Problem

You switched to local. Now you're stuck on one machine.

Sound familiar?

💰

Cloud API costs are bleeding you dry

You're running Aider, CrewAI, OpenClaw, or other AI agents. Cloud API bills run into hundreds of dollars a month and keep climbing. Every token costs money. Every request leaves your network.

💻

Local LLMs freed you — partially

You switched to Ollama on your Mac. Free, private, fast. But now you're constrained to a single device. Requests queue up behind each other. Larger models need more RAM than your laptop has. Agents stall waiting for inference.

Meanwhile, your other devices sit idle

Your Mac Studio with 256GB. Your old MacBook Air with 16GB. Your Mac Mini in the closet. All that memory and compute, doing nothing. Herd connects them all into one endpoint. Big models route to the machine with the most memory. Small models run on the lightweight device. Every machine contributes what it can.

Mac Studio 256GB llama3.3:70b + FLUX image gen
MacBook Pro 36GB qwen3.5:32b + Qwen3-ASR
MacBook Air 16GB llama3.3:8b
4
Model types routed
445
Tests passing
2
Commands to deploy
0
Config files needed
Features

Everything your fleet needs

Intelligent routing that gets smarter the longer it runs. Every component exists to serve one thing: getting the best response as fast as possible.

7-Signal Scoring Engine

Thermal state, memory fit, queue depth, latency history, role affinity, availability trend, and context fit. Every request goes to the best machine.

Auto-Retry & Fallbacks

Transparent retry on node failure before the first chunk. Client-specified fallback models. Holding queue when all nodes are busy.

🔌

Zero-Config Discovery

mDNS auto-discovery. Nodes find the router on the LAN automatically. No config files, no service registries, no manual IP addresses.

📈

Real-Time Dashboard

8-tab live dashboard with SSE. Fleet overview, trends, model insights, per-tag analytics, benchmarks, health, recommendations, and settings. Multimodal type badges and per-node capability matrix. All backed by SQLite.

💡

Adaptive Capacity Learning

168-slot behavioral model learns each device's weekly patterns. Meeting detection pauses inference when you're on a call.
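The 168 slots correspond to the hours of a week (24 × 7). A minimal sketch of how such a model could track per-slot availability — the slot layout, the EMA update rule, and the `alpha` value are illustrative assumptions, not Herd's actual internals:

```python
from datetime import datetime

SLOTS = 24 * 7  # one slot per hour of the week

def slot_index(ts: datetime) -> int:
    """Map a timestamp to one of 168 hour-of-week slots (Monday 00:00 = slot 0)."""
    return ts.weekday() * 24 + ts.hour

def update(model: list[float], ts: datetime, available: bool, alpha: float = 0.2) -> None:
    """Exponential moving average per slot, so recent weeks dominate."""
    i = slot_index(ts)
    model[i] = (1 - alpha) * model[i] + alpha * (1.0 if available else 0.0)

history = [0.5] * SLOTS  # start with no prior knowledge
# Monday 9am, device owner is on a call: availability for that slot drifts down.
update(history, datetime(2025, 1, 6, 9), available=False)
```

After enough weeks, low-scoring slots (recurring meetings, work hours on a laptop) can be deprioritized by the router before a request ever lands on that device.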

🔒

Multi-Protocol API

OpenAI-compatible endpoints for chat, images, and transcription. Plus native Ollama format. Drop-in replacement for any existing client, framework, or agent pipeline.
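Since the endpoints are OpenAI-compatible, any client can talk to the router with a standard chat-completions payload. A minimal sketch — the host and port below are placeholders for your router's actual address, and the payload shape is just the standard OpenAI chat format:

```python
import json

# Hypothetical router address; substitute your Herd router's host and port.
HERD_BASE_URL = "http://localhost:8000/v1"

# A standard OpenAI-style chat payload; Herd picks the best node for the model.
payload = {
    "model": "llama3.3:70b",
    "messages": [{"role": "user", "content": "Summarize this repo in one line."}],
    "stream": True,
}
body = json.dumps(payload)
# POST `body` to f"{HERD_BASE_URL}/chat/completions" with any HTTP client,
# or point the official openai SDK at the router via base_url=HERD_BASE_URL.
```

The same base_url swap works for the images and transcription endpoints, so existing OpenAI-based pipelines need no other changes.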

🎨

Multimodal Routing

Route LLMs, embeddings, image generation, and speech-to-text across the fleet. Capability-aware — image requests only go to nodes with mflux, transcription only to nodes with Qwen3-ASR.

🧠

Thinking Model Support

Auto-detects chain-of-thought models like DeepSeek-R1 and inflates token budgets by 4×. Diagnostic headers show exactly how thinking tokens were spent.

📊

Smart Benchmark

Auto-discovers fleet capabilities, selects an optimal model mix to fill available memory, and benchmarks LLMs, embeddings, image gen, and STT together. Per-model and per-node charts.

💫

Dynamic Context Optimization

Measures actual token usage per model, recommends optimal context sizes, and auto-adjusts to reclaim wasted VRAM. Most models use under 5% of allocated context — Herd fixes that.

Multimodal

Beyond text inference

One fleet, four model types. Every request routes to a node with the right capabilities.

💬

LLM

Chat, completion, reasoning. Smart routing by memory fit and model size.

Llama 3.3, Qwen 3.5, DeepSeek-V3
🔎

Embeddings

Vector search and RAG pipelines. Route to nodes with embedding models loaded.

nomic-embed-text, mxbai-embed
🎨

Image Generation

Text-to-image via FLUX. OpenAI-compatible endpoint. Routes to nodes with mflux installed and GPU capacity.

FLUX.1 Schnell, FLUX.1 Dev
🎤

Speech-to-Text

Audio transcription routed to capable nodes. OpenAI Whisper-compatible endpoint.

Qwen3-ASR

7 signals. Every request.

The scoring engine evaluates every available node on 7 dimensions before routing. The system learns from every request and improves over time.
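A toy sketch of what multi-signal scoring can look like — the weights, the [0, 1] normalization, and the node fields are illustrative assumptions, not Herd's actual scoring code:

```python
# Illustrative weights over the 7 signals (assumed to sum to 1.0).
WEIGHTS = {
    "thermal": 0.15, "memory": 0.25, "queue": 0.15, "latency": 0.15,
    "affinity": 0.10, "availability": 0.10, "context": 0.10,
}

def score(node: dict) -> float:
    """Weighted sum; each signal is pre-normalized to [0, 1], higher is better."""
    return sum(WEIGHTS[s] * node[s] for s in WEIGHTS)

def pick_node(nodes: list[dict]) -> dict:
    """Eliminate unhealthy nodes first, then take the top-scoring survivor."""
    healthy = [n for n in nodes if n["healthy"]]
    return max(healthy, key=score)

fleet = [
    {"name": "studio", "healthy": True, "thermal": 0.9, "memory": 1.0,
     "queue": 0.6, "latency": 0.8, "affinity": 0.9, "availability": 0.9,
     "context": 1.0},
    {"name": "air", "healthy": True, "thermal": 0.7, "memory": 0.3,
     "queue": 0.9, "latency": 0.6, "affinity": 0.4, "availability": 0.8,
     "context": 0.5},
]
best = pick_node(fleet)  # the Studio wins on memory fit and affinity
```

Feeding measured latency back into a node's `latency` signal after each request is what makes a scorer like this improve over time.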

🌡️
Thermal
💾
Memory
📊
Queue
⏱️
Latency
🎯
Affinity
📈
Availability
🧠
Context
How It Works

Request flow

From client request to streamed response in milliseconds. Every step is traced, logged, and queryable.

1

Request arrives

Client hits any endpoint — chat completion, image generation, transcription, or embeddings. The request is normalized and routed by type.

2

Score & rank

The scoring engine eliminates unhealthy nodes, scores survivors on 7 signals, and selects the best. Fallback models are tried if the primary isn't available.

3

Queue & dispatch

The request enters a per-node:model queue with dynamic concurrency. The queue manager balances load and auto-rebalances if conditions change.

4

Stream & retry

The streaming proxy forwards to Ollama. If the node fails before the first chunk, auto-retry kicks in with a different node. Format conversion (SSE / NDJSON) is transparent.

5

Learn & trace

Every request is traced to SQLite. Latency data feeds back into the scoring engine. The fleet gets smarter with every request it serves.

Compatibility

Works with everything

One base_url change connects any framework. Ollama Herd is the orchestration layer, not a replacement.

Open WebUI
LangChain
CrewAI
OpenHands
AutoGen
Aider
Cline
Continue.dev
LlamaIndex
OpenClaw
LiteLLM
exo

Any client that supports a custom OpenAI or Ollama base URL works out of the box.
Beyond LLMs — also routes image generation (FLUX via mflux) and speech-to-text (Qwen3-ASR) to capable nodes.
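For clients that read the standard OpenAI environment variables (Aider and many others), the base_url change is just two exports. The host and port below are placeholders for your router's actual address, and whether a key is required locally is an assumption:

```shell
# Point any OpenAI-compatible client at the Herd router instead of api.openai.com.
# Host and port below are placeholders; use your router's actual address.
export OPENAI_API_BASE="http://herd-router.local:8000/v1"
export OPENAI_API_KEY="unused"  # local fleet; many clients still require the var to be set
```

Clients with their own settings (Continue.dev, Open WebUI) take the same URL in their model-provider configuration instead.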

The fleet that works while you sleep

Multimodal routing, smart benchmarking, and dynamic context optimization have shipped. Next up is an agentic router: a fleet that doesn't just wait for requests, but generates its own work, learns your patterns, and uses idle compute proactively.

Multimodal routing (LLM + Image + STT + Embeddings)
Smart benchmark (multimodal)
Dynamic context optimization
Video generation + TTS routing
Pattern-driven model pre-warming
Agentic task decomposition

Your Mac fleet is an untapped AI platform

500 MacBooks with Apple Silicon. Tens of terabytes of unified memory. Sitting idle during meetings, after hours, and weekends. Ollama Herd turns your existing hardware into a private AI compute platform — LLM inference, image generation, transcription, and embeddings — at zero additional cost.

SSO, RBAC, audit logging, compliance dashboards, fleet management, and SLA support. Everything enterprises need to run a full AI stack on the hardware they already own.

Contact Us →
$0
Additional hardware cost
58%
Enterprise employees now on Macs
96%
CIOs expect Mac fleet growth
50-70%
Savings vs cloud API costs

Your hardware deserves
an orchestrator

Stop leaving compute on the table. Start herding.

pip install ollama-herd
View on GitHub →