Elixir-native AI coding assistant

AI agents shouldn’t communicate through files on disk.

Your agent teams coordinate through JSON files, polled queues, and worktree copies. They can’t tell if teammates are alive or dead. They shred context when it gets long.

Loomkin runs on the BEAM — where agents are native processes that message in microseconds, share memory, and never lose context.

~19,000 LOC · 100+ concurrent agents

You’ve seen this. You just didn’t know why.

Every AI agent team you’ve used has the same failure modes. They aren’t bugs — they’re symptoms of building actors on runtimes that don’t have them.

“Is this agent alive or dead?”

You spawn three agents. One goes quiet. Is it thinking? Did it crash? Is it waiting for you? The system can’t tell — “idle” and “dead” look identical. So the lead burns its context window playing detective instead of building your feature.

[lead] Graph-builder went idle twice but didn’t send a
completion message. The idle notifications came at 02:05:37
and 02:05:41. It’s likely it received the message and went
idle before fully processing it, or is now working on it.

[lead] I’ll wait a bit longer before checking again...
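On the BEAM, liveness is a one-line question, not detective work. A minimal sketch using plain OTP primitives (not Loomkin's actual API): the lead monitors an agent's pid and is pushed a `:DOWN` message the moment it dies.

```elixir
# Spawn a worker and monitor it in one call.
{pid, ref} = spawn_monitor(fn -> Process.sleep(:infinity) end)

Process.alive?(pid)   # => true, instantly, no polling

# Kill it, and the monitoring process is notified immediately:
Process.exit(pid, :kill)

receive do
  {:DOWN, ^ref, :process, ^pid, reason} ->
    IO.puts("agent down: #{inspect(reason)}")
end
```

No heuristics about idle timestamps: "idle" is a live process waiting in `receive`, and "dead" is a `:DOWN` message.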

“Wait, how is it seeing the other agent’s changes?”

To avoid file conflicts, each agent gets its own full copy of the repo. Isolated, right? Until one agent half-finishes a function and another agent — supposedly sandboxed — starts failing from incomplete code it shouldn’t be able to see. Now the lead is debugging the isolation model instead of your code.

[lead] event-builder has partially modified manager.ex but hasn’t
finished implementing start_nervous_system/1. This is causing
a compile error that context-builder noticed.

[lead] Wait — context-builder said “the project no longer compiles”
but it’s in a separate worktree. How would it see event-builder’s
changes? Maybe the worktrees aren’t fully isolated as expected...

[lead] I should tell context-builder to ignore the error and proceed
with shutdown — it’s just a temporary issue from work in progress.

One agent per file, or everything breaks

Refactoring auth means touching the router, the controller, the context, and the tests. But you can’t assign two agents to router.ex — one would overwrite the other. So you serialize what should be parallel work, and a 10-minute task takes 30.
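Loomkin's alternative is region-level locks: agents claim a function, not a file. A toy sketch of the idea using an `Agent` holding a lock map; the region keys and API here are illustrative, not Loomkin's implementation:

```elixir
# Illustrative lock server: keys are {file, region} pairs, not whole files.
{:ok, locks} = Agent.start_link(fn -> %{} end)

acquire = fn region, owner ->
  Agent.get_and_update(locks, fn held ->
    case held do
      %{^region => other} -> {{:error, {:held_by, other}}, held}
      _ -> {:ok, Map.put(held, region, owner)}
    end
  end)
end

acquire.({"router.ex", :auth_routes}, :agent_1)   # => :ok
acquire.({"router.ex", :api_routes}, :agent_2)    # => :ok — same file, in parallel
acquire.({"router.ex", :auth_routes}, :agent_2)   # => {:error, {:held_by, :agent_1}}
```

Two agents editing different functions in `router.ex` proceed concurrently; only a genuine collision blocks.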

Your research got thrown in a shredder

The researcher spent 30 minutes mapping your codebase — file paths, edge cases, function signatures. Then its context window filled up. All of that became “investigated auth module, found 3 issues.” You paid for a deep analysis and got a sticky note.

The plan was wrong, but nobody can change it

The lead decomposed the task before anyone looked at the code. Agent 3 discovers the plan missed a critical dependency. But it can’t update the plan — it finishes its assignment and reports back. Three agents did work that gets thrown away.

No agent ever says “hey, check my work”

Agents work in total isolation. One writes questionable code, nobody catches it. You find out when tests fail at the end — after you’ve already paid for all the tokens. Real teams do code review. Agent teams can’t.

Same scenario, different outcome

Every row is a situation you’ve been in. The left column is what happened. The right column is what should have happened.

| What happens when… | Current agent frameworks | Loomkin (OTP) |
| --- | --- | --- |
| You need a new agent | Wait 20–30 seconds for a new process to spin up | <500ms — a GenServer starts in the same VM |
| An agent sends a message | Written to a JSON file, polled by the recipient seconds later | PubSub delivers in microseconds — confirmed in the sender’s process |
| An agent goes quiet | Is it thinking? Crashed? Waiting? No way to tell — “idle” and “dead” look identical | `Process.alive?` returns a boolean. Supervisors auto-restart crashed agents |
| Two agents need the same file | Entire repo gets copied to separate worktrees. Merging is your problem | Region-level locks — two agents edit different functions in the same file, in parallel |
| Context window fills up | Old messages get summarized into one sentence. Details are gone forever | Context Mesh — overflow goes to keeper processes. Full fidelity, queryable by any agent |
| An agent discovers the plan is wrong | Finishes its assignment anyway. Reports back. Lead re-plans from scratch | Living plans — any agent can create tasks, flag blockers, or propose revisions in real time |
| You want agents to review each other | Not possible — agents work in isolation, no mechanism for peer feedback | Native review protocol — review gates on critical paths, real-time pair programming |
| An agent crashes mid-task | Nobody notices. The agent just stops responding. Lead might figure it out eventually | Supervisor restarts it in milliseconds. Context survives in keepers. Team gets notified |
| You want 10+ agents | 3–5 is the practical limit before coordination overhead overwhelms the work | 100+ lightweight BEAM processes — coordination is in-memory, not API calls |
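The latency rows above come straight from BEAM primitives. Loomkin uses Phoenix.PubSub, but the underlying mechanism is plain in-memory message passing, which you can time yourself (illustrative code, not Loomkin's):

```elixir
# Round-trip a message between two processes in the same VM.
parent = self()

echo =
  spawn(fn ->
    receive do
      {:ping, from} -> send(from, :pong)
    end
  end)

{micros, :pong} =
  :timer.tc(fn ->
    send(echo, {:ping, parent})

    receive do
      :pong -> :pong
    end
  end)

IO.puts("round trip: #{micros}µs")   # typically single-digit microseconds
```

No files, no polling loop: delivery is a write into the recipient's in-memory mailbox.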

Memory that never dies

Other systems shred old context to make room. Loomkin offloads it to lightweight keeper processes — full fidelity, queryable by any agent, persisted to SQLite.

Traditional (Claude Code, Aider)
Single agent process. Context window: 128K tokens.
- system_prompt: 2K
- recent_messages: 100K
- COMPACTED: 20K ← lossy
- tools: 6K
Total preserved: 128K · Total LOST: unbounded

Loomkin (Context Mesh)
Working agent (GenServer). Context window: 128K tokens.
- system_prompt: 2K
- current_task: 80K
- keeper_index: 1K ← pointers
- tools: 6K
- Keeper “auth-research”: 45K, full
- Keeper “billing-deps”: 30K, full
- Keeper “test-results”: 25K, full
Total preserved: 228K · Total LOST: ZERO

1,000 keepers on 500MB of RAM = 100 million tokens preserved simultaneously
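A keeper is, at bottom, just a process holding state. A toy sketch with `Agent` (Loomkin's real keepers are GenServers persisted to SQLite, so the names and API here are hypothetical):

```elixir
# Illustrative keeper: stores full-fidelity context chunks, queryable by any agent.
defmodule Keeper do
  use Agent

  def start_link(topic),
    do: Agent.start_link(fn -> %{topic: topic, chunks: []} end)

  # Offload overflow text instead of summarizing it away.
  def store(pid, chunk),
    do: Agent.update(pid, fn s -> %{s | chunks: [chunk | s.chunks]} end)

  # Recall returns the full text, in order; nothing is lost.
  def recall(pid),
    do: Agent.get(pid, fn s -> Enum.reverse(s.chunks) end)
end

{:ok, keeper} = Keeper.start_link("auth-research")
Keeper.store(keeper, "POST /login validates via Auth.Token.verify/2 ...")
Keeper.recall(keeper)   # full text back, not a one-line summary
```

Because each keeper is a lightweight BEAM process, holding a thousand of them is unremarkable; that is the arithmetic behind the line above.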

What OTP actually gives you

- <500ms: agent spawn time (GenServer.start_link)
- 18x: cost savings ($0.25 vs $4.50 for 10 agents)
- 100+: concurrent agents per BEAM node
- 0: context lost, ever
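The spawn-time number is ordinary OTP behavior. A sketch of starting an agent under a `DynamicSupervisor` and timing it; the module name is invented for illustration:

```elixir
# Illustrative agent process; real Loomkin agents carry far more state.
defmodule AgentWorker do
  use GenServer

  def start_link(name), do: GenServer.start_link(__MODULE__, name)

  def init(name), do: {:ok, %{name: name}}
end

{:ok, sup} = DynamicSupervisor.start_link(strategy: :one_for_one)

{micros, {:ok, _pid}} =
  :timer.tc(fn ->
    DynamicSupervisor.start_child(sup, {AgentWorker, "graph-builder"})
  end)

IO.puts("spawned in #{micros}µs")   # microseconds, not tens of seconds
```

The supervisor also restarts the child automatically if it crashes; no extra machinery is needed for that.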

Everything, on the BEAM

27 built-in tools, a persistent decision graph, and a LiveView workspace — supervised, fault-tolerant, and hot-reloadable.

Teams-First Runtime

Every session is a team of one that auto-scales. Solo agent to full swarm — no mode switch, no opt-in. The agent decides based on task complexity.

27+ Tools as Jido Actions

Files, search, shell, Git, LSP diagnostics, decision logging — each a supervised action that can crash independently without taking down the session.
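Crash-independence is plain supervision, not anything Jido-specific. A sketch with `Task.Supervisor`, where a failing tool run leaves the calling session process untouched (illustrative, not the actual Jido Action API):

```elixir
{:ok, tools} = Task.Supervisor.start_link()

# A tool action that blows up mid-run:
task = Task.Supervisor.async_nolink(tools, fn -> raise "shell tool failed" end)
ref = task.ref

# The session (this process) is not linked, so it survives and just observes:
receive do
  {:DOWN, ^ref, :process, _pid, {%RuntimeError{message: msg}, _stack}} ->
    IO.puts("tool crashed: #{msg}; session still running")
end
```

The session can then retry, escalate, or report; the blast radius of a bad tool call is one process.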

Persistent Decision Graph

SQLite-backed DAG with 7 node types, typed edges, and confidence scores. Persists reasoning across sessions. Any agent can read and write to it.
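The graph is plain data underneath. A hypothetical sketch of the node/edge shape; Loomkin's actual schema lives in SQLite and its seven node types are not listed here, so these structs and type atoms are illustrative:

```elixir
defmodule Decision.Node do
  # :type would be one of the 7 node types; :decision and :evidence are made up here.
  defstruct [:id, :type, :body, confidence: 1.0]
end

defmodule Decision.Edge do
  # Typed edges: the :label atom names the relationship.
  defstruct [:from, :to, :label]
end

nodes = [
  %Decision.Node{id: 1, type: :decision, body: "use region locks", confidence: 0.9},
  %Decision.Node{id: 2, type: :evidence, body: "worktree copies caused merge pain"}
]

edges = [%Decision.Edge{from: 2, to: 1, label: :supports}]

# Any agent can walk the DAG, e.g. collect the evidence supporting node 1:
supports =
  for %{from: f, to: 1, label: :supports} <- edges,
      n = Enum.find(nodes, &(&1.id == f)),
      do: n.body

# => ["worktree copies caused merge pain"]
```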

LiveView Workspace

Streaming chat, interactive decision graph, diff viewer, team dashboard — all server-rendered over WebSocket. Zero JavaScript framework.

665+ Models, Mixed Per-Agent

Cheap models for grunt work, expensive models for judgment calls. Dynamic escalation: if a cheap model fails twice, auto-promote to the next tier.
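Dynamic escalation reduces to a small policy function. A hedged sketch: the two-failure threshold comes from the description above, but the tier names and function are illustrative:

```elixir
defmodule Escalation do
  # Hypothetical tiers, ordered cheapest to most capable.
  @tiers ["cheap-model", "mid-model", "frontier-model"]

  # After two failures, promote to the next tier (capped at the top).
  def next_model(current, failures) when failures >= 2 do
    idx = Enum.find_index(@tiers, &(&1 == current))
    Enum.at(@tiers, min(idx + 1, length(@tiers) - 1))
  end

  def next_model(current, _failures), do: current
end

Escalation.next_model("cheap-model", 1)   # => "cheap-model" — keep trying
Escalation.next_model("cheap-model", 2)   # => "mid-model" — auto-promote
```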

Single Binary

Burrito-wrapped for macOS and Linux. No Erlang, no Elixir install. Download, set your API key, run.

Seven layers deep

A supervision tree from user input to model output, with fault isolation at every boundary. Agents, keepers, and tools are all GenServers under OTP supervisors.

Interfaces CLI · LiveView · MCP
Teams Layer Agent · Keeper · PubSub Mesh
Tool Layer 27 Jido Actions
Intelligence Decision Graph · Repo Map · AST
Protocol Layer MCP Client + Server · LSP
LLM Layer req_llm · 16+ providers
Telemetry Cost · Tokens · Latency
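The layers map onto an ordinary supervision tree. A skeletal sketch whose module names are invented; only the shape (layered children with fault isolation at each boundary) reflects the description above:

```elixir
defmodule Demo.Supervisor do
  use Supervisor

  def start_link(_), do: Supervisor.start_link(__MODULE__, :ok)

  def init(:ok) do
    children = [
      {Registry, keys: :unique, name: Demo.AgentRegistry},   # teams layer: name lookup
      {DynamicSupervisor, name: Demo.AgentSup},              # agents and keepers
      {Task.Supervisor, name: Demo.ToolSup}                  # tool layer
    ]

    # :rest_for_one restarts the failed child and every child started after it,
    # so a crash lower in the stack resets the layers that depend on it.
    Supervisor.init(children, strategy: :rest_for_one)
  end
end

{:ok, sup} = Demo.Supervisor.start_link([])
```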

Up and running in minutes

Elixir 1.18+, an API key, and six commands.

~/ — zsh

```shell
$ git clone https://github.com/bleuropa/loomkin.git
$ cd loomkin
$ mix setup            # deps + DB
$ mix phx.server       # Web UI on :4200
$ mix escript.build    # build the CLI binary
$ ./loomkin --project . "refactor the auth module"
```