Your spare Mac has 36GB of RAM doing nothing. Your main machine is bottlenecked running inference alone. Fix that.
Sound familiar?
You're running Aider, CrewAI, OpenClaw, or other AI agents. Cloud API bills hit hundreds a month and keep climbing. Every token costs money. Every request leaves your network.
You switched to Ollama on your Mac. Free, private, fast. But now you're constrained to a single device. Requests queue up behind each other. Larger models need more RAM than your laptop has. Agents stall waiting for inference.
Your Mac Studio with 96GB. Your old MacBook Air with 16GB. Your Mac Mini in the closet. All that memory and compute, doing nothing. Herd connects them all into one endpoint. Big models route to the machine with the most memory. Small models run on the lightweight device. Every machine contributes what it can.
Intelligent routing that gets smarter the longer it runs. Every component exists to serve one thing: getting the best response as fast as possible.
Thermal state, memory fit, queue depth, latency history, role affinity, availability trend, and context fit. Every request goes to the best machine.
Transparent retry on node failure before the first chunk. Client-specified fallback models. Holding queue when all nodes are busy.
mDNS auto-discovery. Nodes find the router on the LAN automatically. No config files, no service registries, no manual IP addresses.
5-tab live dashboard with SSE. Fleet overview, trends, model insights, per-app analytics, and benchmarks. All backed by SQLite.
168-slot behavioral model learns each device's weekly patterns. Meeting detection pauses inference when you're on a call.
Both OpenAI and Ollama format endpoints. Drop-in replacement for any existing client, framework, or agent pipeline.
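The 168 slots above are one per hour of the week (24 hours × 7 days). A minimal sketch of the slot math, assuming a simple EMA-smoothed availability score per slot; the class name, smoothing factor, and update rule are illustrative, and the actual learning and meeting-detection logic are out of scope:

```python
from datetime import datetime

class WeeklyAvailability:
    """Hypothetical sketch: one smoothed availability score per
    hour-of-week slot (7 days x 24 hours = 168 slots)."""

    SLOTS = 7 * 24

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha                 # EMA smoothing factor (assumed)
        self.scores = [0.5] * self.SLOTS   # start neutral

    @staticmethod
    def slot(ts: datetime) -> int:
        # Monday 00:00 -> slot 0, Sunday 23:00 -> slot 167
        return ts.weekday() * 24 + ts.hour

    def observe(self, ts: datetime, available: bool) -> None:
        i = self.slot(ts)
        self.scores[i] += self.alpha * (float(available) - self.scores[i])

    def expected(self, ts: datetime) -> float:
        return self.scores[self.slot(ts)]
```

A device that is reliably free every Monday at 09:00 drifts toward 1.0 in that slot, so the router can prefer it at that hour before any request arrives.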
The scoring engine evaluates every available node on 7 dimensions before routing. The system learns from every request and improves over time.
From client request to streamed response in milliseconds. Every step is traced, logged, and queryable.
Client hits the OpenAI-compatible or Ollama-compatible endpoint. The request is normalized into a unified format.
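A sketch of what that normalization step could look like. The exact internal shape is an assumption; the field mapping reflects the public formats (Ollama nests sampling parameters under "options", OpenAI keeps them top-level):

```python
def normalize_request(body: dict, source: str) -> dict:
    """Illustrative sketch: collapse OpenAI- and Ollama-format chat
    requests into one internal shape before routing."""
    unified = {
        "model": body["model"],
        "messages": body["messages"],
        "stream": body.get("stream", True),
    }
    if source == "ollama":
        # Ollama carries sampling params in a nested "options" object
        unified["temperature"] = body.get("options", {}).get("temperature")
    else:  # openai-format
        unified["temperature"] = body.get("temperature")
    return unified
```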
The scoring engine eliminates unhealthy nodes, scores survivors on 7 signals, and selects the best. Fallback models are tried if the primary isn't available.
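The two-phase shape of that selection can be sketched as hard-filter-then-rank. The weights and the assumption that each signal arrives pre-normalized to 0..1 (higher is better) are illustrative, not the shipped values:

```python
def pick_node(nodes: list[dict], weights: dict[str, float]):
    """Sketch: drop unhealthy nodes outright, then rank survivors
    by a weighted sum of the seven routing signals."""
    healthy = [n for n in nodes if n["healthy"]]
    if not healthy:
        return None   # caller falls through to the holding queue / fallbacks

    def score(n: dict) -> float:
        return sum(w * n["signals"][name] for name, w in weights.items())

    return max(healthy, key=score)

WEIGHTS = {  # illustrative weights only
    "thermal_state": 1.0, "memory_fit": 2.0, "queue_depth": 1.5,
    "latency_history": 1.5, "role_affinity": 1.0,
    "availability_trend": 0.5, "context_fit": 1.0,
}
```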
The request enters a per-node:model queue with dynamic concurrency. The queue manager balances load and auto-rebalances if conditions change.
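The core of a per-node:model queue is one concurrency gate per (node, model) pair whose limit can change at runtime. A minimal sketch under that assumption; the class name, default limit, and rebalance trigger are made up for illustration:

```python
import threading
from collections import defaultdict

class QueueManager:
    """Sketch: one concurrency limit per node:model key,
    adjustable at runtime (dynamic concurrency)."""

    def __init__(self, default_limit: int = 2):
        self._limits = defaultdict(lambda: default_limit)
        self._active = defaultdict(int)
        self._lock = threading.Lock()

    def try_acquire(self, node: str, model: str) -> bool:
        key = f"{node}:{model}"
        with self._lock:
            if self._active[key] >= self._limits[key]:
                return False   # stays in the holding queue
            self._active[key] += 1
            return True

    def release(self, node: str, model: str) -> None:
        with self._lock:
            self._active[f"{node}:{model}"] -= 1

    def rebalance(self, node: str, model: str, limit: int) -> None:
        # e.g. shrink the limit when a node reports thermal pressure
        with self._lock:
            self._limits[f"{node}:{model}"] = limit
```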
The streaming proxy forwards to Ollama. If the node fails before the first chunk, auto-retry kicks in with a different node. Format conversion is transparent: OpenAI-format clients get SSE, Ollama-format clients get NDJSON.
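The retry rule above hinges on one invariant: a failure is only retryable while nothing has reached the client. A sketch of that boundary, with `forward(node)` standing in for the actual upstream streaming call:

```python
def stream_with_retry(candidates, forward):
    """Sketch: fail over silently between candidate nodes, but only
    until the first chunk has been yielded. After that, the client
    has already seen output, so the error must propagate."""
    last_err = None
    for node in candidates:
        first_chunk_sent = False
        try:
            for chunk in forward(node):
                first_chunk_sent = True
                yield chunk
            return                       # stream completed normally
        except Exception as err:
            if first_chunk_sent:
                raise                    # mid-stream failure: no retry
            last_err = err               # pre-first-chunk: try next node
    raise RuntimeError("all candidate nodes failed") from last_err
```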
Every request is traced to SQLite. Latency data feeds back into the scoring engine. The fleet gets smarter with every request it serves.
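The feedback loop is concrete: traces go in, aggregates come back out as a scoring signal. A stdlib-only sketch with an illustrative schema (column names and the time-to-first-byte metric are assumptions, not the actual tables):

```python
import sqlite3

def open_trace_db(path: str = ":memory:") -> sqlite3.Connection:
    """Sketch of the trace store; schema is illustrative."""
    db = sqlite3.connect(path)
    db.execute("""
        CREATE TABLE IF NOT EXISTS traces (
            ts REAL, node TEXT, model TEXT,
            ttfb_ms REAL, total_ms REAL, ok INTEGER
        )""")
    return db

def record(db, node, model, ttfb_ms, total_ms, ok=True):
    db.execute(
        "INSERT INTO traces VALUES (strftime('%s','now'), ?, ?, ?, ?, ?)",
        (node, model, ttfb_ms, total_ms, int(ok)))

def latency_history(db, node, model):
    # Aggregate that could feed the scoring engine's latency signal
    row = db.execute(
        "SELECT AVG(ttfb_ms) FROM traces WHERE node=? AND model=? AND ok=1",
        (node, model)).fetchone()
    return row[0]
```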
One base_url change connects any framework. Ollama Herd is the orchestration layer, not a replacement.
Any client that supports a custom OpenAI or Ollama base URL works out of the box.
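In practice the change really is just the host: an OpenAI-format chat completion is a JSON POST to `{base_url}/v1/chat/completions`, so pointing `base_url` at the router redirects everything. A stdlib-only sketch; the router address and port below are placeholders, not real defaults:

```python
import json
from urllib.request import Request

def chat_request(base_url: str, model: str, messages: list) -> Request:
    """Build an OpenAI-format chat completion request. Swapping
    base_url from api.openai.com to the router is the only
    client-side change."""
    return Request(
        url=f"{base_url}/v1/chat/completions",
        data=json.dumps({"model": model, "messages": messages}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Before: base_url = "https://api.openai.com"
# After:  base_url = "http://herd-router.local:8034"   # placeholder address
```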
We're building an agentic router — a fleet that doesn't just wait for requests, but generates its own work, learns your patterns, and uses idle compute proactively.
500 MacBooks with Apple Silicon. Tens of terabytes of unified memory. Sitting idle during meetings, after hours, and weekends. Ollama Herd turns your existing hardware into a private AI inference cluster — zero additional cost.
SSO, RBAC, audit logging, compliance dashboards, fleet management, and SLA support. Everything enterprises need to run AI inference on the hardware they already own.
Contact Us →
Stop leaving compute on the table. Start herding.