Free · MIT · macOS · v0.7.62

Every AI.
One desktop.

Claude, GPT, Gemini, Grok, Kimi, Qwen, GLM, MiniMax, Hunyuan — plus local models. One quiet desktop app for every AI you want to talk to. No lock-in, no subscription.

⌘K focus composer · ⌘N new session · Esc close
Labaik — Health Space · claude-sonnet-4-6
Analyze this CBC panel — WBC 12.4, HGB 10.1, PLT 420…
📝 write_file · lab-summary.md ✓ · run_command · 🌐 open_in_browser
Reference ranges (adult):
  • WBC 12.4 — slightly elevated (ref 4.0–11.0 ×10⁹/L)
  • HGB 10.1 — below range (ref 13.5–17.5 g/dL)
  • PLT 420 — upper-normal (ref 150–450 ×10⁹/L)
Informational only — not medical advice.
What you get

Ten capabilities — in one desktop app.

Everything here ships today. Free and open source.

No subscription
$0 app fee. You pay only the providers you already have keys for — or pay nothing when you run local models.

10 providers
Anthropic (Claude 4.7/4.6/Haiku 4.5), OpenAI (GPT-5.4/Thinking/Pro/5.3), Google (Gemini 3.1 Pro/Flash), xAI (Grok 4.20), Kimi (kimi.ai + kimi.com, K2.6), Alibaba Qwen 3.6 Max, Zhipu GLM-5, MiniMax M2.7, Tencent Hunyuan Hy3 Preview. One dropdown.

Local via Ollama
Fully offline when you want. In-app catalog with one-click pulls for Qwen 3.6, Gemma 4, Llama 3.2/3.3, DeepSeek R1. Installed models appear alongside cloud ones in the same picker.

Crew — multi-model in parallel
Send one prompt to 2–4 models at once. Each replies in its own lane. Pick the best, or have them debate each other. Useful when one model's bias isn't enough.

Skills — cron for AI
Schedule Labaik to do things on its own. "Summarize HN at 8am." "Draft my standup from git log at 6pm." "Ping prod every 15 minutes." Results stream to a dedicated session.

Built-in browser
The agent can open URLs, read pages, click buttons, fill forms, and take screenshots — all inside the app. Research, QA, and web-scraping without leaving chat.

Rich content display
Real markdown, fenced code with copy buttons, unified diffs, charts, Mermaid, SVG, file previews, artifacts. Whatever the model returns, the app renders it right.

Persistent memory + profile
"Remember this" button on any message. Profile = always-on facts about you. Memory = workspace-scoped, searchable with embeddings. Survives every crash (atomic file writes to ~/.labaik/).

Tool calling + MCP
Six built-in workspace tools (read_file, write_file, list_directory, run_command, open_in_browser, start_dev_server) plus full MCP server support for anything else.

Privacy by construction
Developer-ID signed with hardened runtime. No Labaik backend. API keys in ~/.labaik/credentials.json at mode 0600. Prompts go machine → provider directly. Zero telemetry sent to anyone.

What's inside

Everything a modern AI surface should be.

Nine capability pillars. None of them bolted on — each one is load-bearing in the product.

☁️

10 cloud providers

Anthropic (Claude 4.7 / 4.6 / Haiku 4.5), OpenAI (GPT-5.4 + Thinking/Pro + 5.3), Google (Gemini 3.1 Pro / 3 Flash), xAI (Grok 4.20), Kimi (kimi.ai + kimi.com, K2.6), Qwen (3.6 Max Preview), GLM-5, MiniMax M2.7, Tencent Hunyuan Hy3. Your keys, in ~/.labaik/credentials.json at mode 0600.

🦙

Local models, first-class

In-app Ollama catalog with one-click pulls. Qwen 3.6, Gemma 4 (E2B/E4B/26B/31B), Llama 3.2/3.3, DeepSeek R1. Installed models land in the same dropdown as cloud ones.

👥

Crew — multi-model

Claude + GPT + Gemini arguing until they get it right. Send one prompt to 2–4 models in parallel, pick the best reply, or have them debate each other. No other desktop AI does this.

Skills — cron for AI

Schedule Labaik to do things. "Summarize Hacker News at 8am." "Draft my standup from git log at 6pm." "Ping prod every 15 min." Runs in the background, results stream to a dedicated session.

🧠

Memory that actually sticks

Profile (always-on facts) + scoped memory with embeddings. Workspace-aware. "Remember this" button on any message. Persisted to ~/.labaik/*.json — survives every crash.

🌐

Browser built in

The agent can open any URL, read pages, fill forms, click buttons, and take screenshots — all from inside the app. Research without leaving chat.

🛠️

Tool calling that works

Six workspace tools: read_file, write_file, list_directory, run_command, open_in_browser, start_dev_server. Plus MCP server support for anything else.
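
Concretely, a workspace tool is just a name plus a JSON-Schema parameter block. Here is a minimal sketch in the OpenAI-style function-calling format that most providers accept; Labaik's internal schema may differ.

// Sketch only: a read_file-style tool described in the OpenAI-compatible
// "tools" format. Field names follow that public format, not Labaik internals.
const readFileTool = {
  type: "function",
  function: {
    name: "read_file",
    description: "Read a UTF-8 text file inside the current workspace",
    parameters: {
      type: "object",
      properties: {
        path: { type: "string", description: "Path relative to the workspace root" },
      },
      required: ["path"],
    },
  },
};

MCP servers plug in the same way: each one advertises tools with names and schemas, and the app forwards those to the model alongside the built-ins.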

🔐

Privacy by construction

No Labaik backend. Prompts go straight from your machine to the provider you picked. The local event log stays on disk; zero telemetry sent to anyone. Renderer ↔ Main isolated via contextBridge. No nodeIntegration. Developer-ID signed + hardened runtime.

Chat polish

Real markdown, fenced code with copy buttons, hover-to-copy, smart auto-scroll, voice input, drag-and-drop attachments (PDF, DOCX, XLSX, images), keyboard shortcuts, Cmd+F search across all sessions.

Models

Cloud when you need the horsepower.
Local when the prompt shouldn't leave.

Labaik is provider-agnostic. Switch in a dropdown. Tool calling is gated per model, so tiny models never get tools they can't format.

☁️ Cloud — each provider's current flagship

  • Anthropic · Claude Opus 4.7, Sonnet 4.6, Haiku 4.5
  • OpenAI · GPT-5.4 (+ Thinking, Pro), GPT-5.3
  • Google · Gemini 3.1 Pro, 3 Flash, 3.1 Flash-Lite
  • xAI · Grok 4.20 (reasoning + non-reasoning)
  • Kimi · K2.6 (256k) — both kimi.ai (Global) and kimi.com (CN)
  • Alibaba · Qwen 3.6 Max Preview (262k), Qwen 3 Coder Plus (1M)
  • Zhipu · GLM-5, GLM-5.1 (#1 on SWE-Bench Pro)
  • MiniMax · MiniMax-M2.7
  • Tencent · Hunyuan Hy3 Preview

🦙 Local via Ollama — fully offline

  • Qwen 3.6 · balanced general-purpose
  • Gemma 4 · E2B / E4B / 26B / 31B
  • Llama 3.2 / 3.3 · Meta's latest
  • DeepSeek R1 · reasoning distill
  • …and anything else with an Ollama tag.
One-click pulls with a progress bar. Cancel anytime. Installed models appear alongside cloud ones in the same dropdown. Tool calling is gated per model so tiny models never get tools they can't format.
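
Under the hood, a pulled model is reachable over Ollama's local HTTP API (default port 11434). A rough sketch; the model names, gate list, and helper below are illustrative, not Labaik's actual code.

// Sketch: chat with a local model through Ollama's default local endpoint.
// TOOL_CAPABLE is a hypothetical gate list illustrating per-model tool gating.
const TOOL_CAPABLE = new Set(["qwen3", "llama3.3", "deepseek-r1"]);

async function askLocal(model: string, prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/chat", {
    method: "POST",
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
      stream: false,
      // tool definitions would only be attached if TOOL_CAPABLE.has(model)
    }),
  });
  const data = await res.json();
  return data.message.content;
}

A model absent from the capable list simply never sees the tool definitions, which is what keeps tiny models from emitting tool calls they can't format.
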
Privacy

Your keys, your telemetry, your machine.

🔐

Credentials stay local

API keys at ~/.labaik/credentials.json, mode 0600. Memory, profile and session history at ~/.labaik/*.json. Nothing leaves your machine except outbound requests to the provider you picked.

📡

No Labaik backend

There is no Labaik server to talk to. Prompts go straight from your machine to whichever LLM provider you selected. The OODA event log stays on disk at ~/.labaik/events.ndjson. Zero telemetry sent to anyone.

🧯

Safe by construction

Developer-ID signed + hardened runtime. Renderer ↔ Main isolated via contextBridge. No nodeIntegration. Markdown renderer escapes HTML before transforms. Health Space carries a not-medical-advice reminder.
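
That isolation is the standard Electron pattern: a preload script exposes a narrow, whitelisted API to the renderer instead of Node access. An illustrative sketch; the channel names are made up, not Labaik's real IPC surface.

// preload.ts sketch: contextBridge isolation with nodeIntegration disabled.
import { contextBridge, ipcRenderer } from "electron";

contextBridge.exposeInMainWorld("labaik", {
  // Renderer can only call these two functions; it never touches Node APIs.
  sendPrompt: (text: string) => ipcRenderer.invoke("chat:send", text),
  onToken: (cb: (token: string) => void) =>
    ipcRenderer.on("chat:token", (_event, token) => cb(token)),
});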

Download v0.7.62

Install Labaik

Developer-ID signed · hardened runtime · builds attached to every GitHub release. Windows and Linux builds coming soon.

First launch on macOS Sequoia (15+): the Intel DMG is fully notarized — double-click and it opens. The Apple Silicon DMG is Developer-ID signed but its notarization ticket is still processing at Apple; until it lands, macOS shows "Apple could not verify Labaik is free of malware." To open it once: click Done, then open System Settings → Privacy & Security → scroll to the Security section → click Open Anyway next to "Labaik was blocked" → confirm with Touch ID. After that, double-click works normally. Notarized builds auto-replace on this page once Apple finishes.
Prefer to run from source?
Bun or Node 18+ · Electron 33 · MIT
git clone git@github.com:alsayadi/alaude-desktop.git
cd alaude-desktop
bun install
bun start
FAQ

Questions people actually ask.

Do I need an API key? Which one?

Yes — at least one. Anthropic for Claude, OpenAI for GPT-5, Google for Gemini, etc. Create a key at the provider's website (e.g. console.anthropic.com / platform.openai.com), paste it into Labaik's Keys modal, done. You can connect multiple providers and switch in a dropdown. Or skip keys entirely and run Ollama for fully local, fully offline use.

Why BYO keys instead of a subscription?

Because Cursor charges $20/mo on top of what you already pay providers. You shouldn't pay twice. Labaik is just a desktop UI over the APIs you already have — no middleman, no markup, no data-mining. If you never open it, it costs zero.

Can I really use Chinese-lab models (Kimi, Qwen, GLM, MiniMax, Hunyuan)?

Yes — Labaik talks to them directly via their official OpenAI-compatible APIs. Create a key at the relevant platform (platform.kimi.ai / kimi.com, dashscope.aliyun.com, open.bigmodel.cn, platform.minimax.io, hunyuan.tencent.com), paste it in. No VPN needed from outside China if you use the kimi.ai endpoint.
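
"OpenAI-compatible" means the same client code works against any of these providers once you swap the base URL and key. A sketch using the openai npm client; the endpoint URL and model id below are placeholders, so use the values from the provider's docs.

// Sketch: any OpenAI-compatible provider is just a different baseURL + API key.
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.example-provider.com/v1", // placeholder endpoint
  apiKey: process.env.PROVIDER_API_KEY,
});

const reply = await client.chat.completions.create({
  model: "provider-model-name", // placeholder model id
  messages: [{ role: "user", content: "Hello" }],
});
console.log(reply.choices[0].message.content);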

Does Labaik send any data to a server you control?

No. There is no Labaik backend. Prompts go straight from your machine to whichever LLM provider you selected. The OODA event log stays on disk at ~/.labaik/events.ndjson.
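
NDJSON just means one JSON object per line, appended locally. A sketch of what writing such an event could look like; the event fields are illustrative, not Labaik's actual log schema.

// Sketch: append one event per line to the local NDJSON log.
import { appendFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const logPath = join(homedir(), ".labaik", "events.ndjson");
appendFileSync(
  logPath,
  JSON.stringify({ ts: Date.now(), type: "prompt_sent", provider: "anthropic" }) + "\n",
);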

Can I run it fully offline?

Yes — install Ollama, pull a model through the in-app catalog, and you're done. Tool calling works on capable local models (Qwen 3.6, Gemma 4 E4B+, Llama 3.3, DeepSeek R1). Tiny models have tool calling disabled on purpose — they can't format it reliably.

What's Crew?

Multi-model mode. Send one prompt and 2–4 models reply in parallel (e.g. Claude + GPT-5 + Gemini), each in their own lane. Pick the best reply or have them debate each other. Useful for consequential decisions where one model's bias isn't enough.

What's Skills?

Cron for AI. Schedule Labaik to run a prompt on a recurring schedule — "summarize HN at 8am," "draft my standup from git log at 6pm," "check prod status every 15 minutes." Results stream to a dedicated session you can scroll back through. Runs in-process (no cloud daemon), so your machine has to be awake.
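
Mechanically, that's an in-process scheduler firing prompts on a cron expression. A rough sketch; node-cron and runSkill are illustrative stand-ins, not Labaik's actual Skills implementation.

// Sketch: a recurring prompt fired by an in-process scheduler.
import cron from "node-cron";

// Hypothetical stand-in for "send this prompt to the selected model and
// stream the result into the Skills session".
async function runSkill(prompt: string): Promise<void> {
  console.log("would run:", prompt);
}

// "0 8 * * *" = every day at 08:00 local time; only fires while the app is running.
cron.schedule("0 8 * * *", () => {
  void runSkill("Summarize the Hacker News front page");
});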

How does memory work?

Two layers. Profile = always-on facts about you (name, role, preferences) injected into every turn. Memory = scoped, searchable with embeddings, filtered by workspace so your work stuff doesn't bleed into your personal stuff. Both persisted to ~/.labaik/*.json with atomic writes — survives every crash.
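
The atomic write is the usual temp-file-then-rename pattern, so a crash mid-write never leaves a half-written JSON file behind. A minimal sketch of the pattern:

// Sketch of an atomic JSON write: write a temp file, then rename over the target.
import { writeFileSync, renameSync } from "node:fs";

function writeAtomic(path: string, data: unknown): void {
  const tmp = path + ".tmp";
  writeFileSync(tmp, JSON.stringify(data, null, 2));
  renameSync(tmp, path); // rename() is atomic on the same filesystem
}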

How are my API keys stored?

In ~/.labaik/credentials.json with file mode 0600. Only your user can read them. They never leave the machine except as outbound requests to the provider you picked.
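
Mode 0600 means the file is readable and writable by your user only. Creating such a file from Node looks roughly like this; the key value shown is a placeholder.

// Sketch: create the credentials file owner-readable/writable only (0600).
import { writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

const credsPath = join(homedir(), ".labaik", "credentials.json");
writeFileSync(credsPath, JSON.stringify({ anthropic: "placeholder-key" }), { mode: 0o600 });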

What about the built-in browser — what can the agent do with it?

Open any URL, read the rendered page, click links, fill forms, type into inputs, take screenshots, and scroll. Useful for research ("summarize this Hacker News thread and save the links I should read later"), QA (navigate a deploy preview and report layout issues), or dull web-scraping tasks. The browser is containerised inside the app, not a popup that steals focus.

Is this a Cursor fork? Claude Code fork?

Neither. Different surface (desktop, not IDE), different audience (everyone, not just coders), different codebase. Labaik takes the lesson of those products — good tool loops are mostly prompt engineering — but is written from scratch, MIT licensed, and fully open source.

What's the license?

MIT. Source at github.com/alsayadi/alaude-desktop.