Do I need an API key? Which one?
Yes — at least one. Anthropic for Claude, OpenAI for GPT-5, Google for Gemini, etc. Create a key at the provider's website (e.g. console.anthropic.com / platform.openai.com), paste it into Labaik's Keys modal, done. You can connect multiple providers and switch between them in a dropdown. Or skip keys entirely and run Ollama for fully local, fully offline use.
Why BYO keys instead of a subscription?
Because Cursor charges $20/mo on top of what you already pay providers. You shouldn't pay twice. Labaik is just a desktop UI over the APIs you already have — no middleman, no markup, no data-mining. If you never open it, it costs zero.
Can I really use Chinese-lab models (Kimi, Qwen, GLM, MiniMax, Hunyuan)?
Yes — Labaik talks to them directly via their official OpenAI-compatible APIs. Create a key at the relevant platform (platform.moonshot.ai for Kimi, dashscope.aliyun.com for Qwen, open.bigmodel.cn for GLM, platform.minimax.io for MiniMax, hunyuan.tencent.com for Hunyuan), paste it in. No VPN needed from outside China if you use each provider's international endpoint.
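"OpenAI-compatible" means every one of these providers speaks the same wire format, so a single request builder covers them all. A minimal sketch — the base URLs and model name below are assumptions drawn from provider docs, so verify them before use:

```python
import json

# Assumed base URLs -- check each provider's current docs before relying on these.
ENDPOINTS = {
    "kimi": "https://api.moonshot.ai/v1",
    "qwen": "https://dashscope.aliyuncs.com/compatible-mode/v1",
    "glm": "https://open.bigmodel.cn/api/paas/v4",
}

def build_chat_request(provider: str, model: str, prompt: str, api_key: str):
    """Return (url, headers, body) for an OpenAI-style chat completion."""
    url = f"{ENDPOINTS[provider]}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body
```

Switching providers is just switching the base URL and the key; the payload never changes.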
Does Labaik send any data to a server you control?
No. There is no Labaik backend. Prompts go straight from your machine to whichever LLM provider you selected. The OODA event log stays on disk at ~/.labaik/events.ndjson.
Can I run it fully offline?
Yes — install Ollama, pull a model through the in-app catalog, and you're done. Tool calling works on capable local models (Qwen 3, Gemma 3n E4B, Llama 3.3, DeepSeek R1). Tiny models have tool calling disabled on purpose — they can't format it reliably.
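Under the hood, an offline chat is one POST to Ollama's local HTTP API on its default port 11434. A sketch (the model name is whatever you pulled):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # Ollama's default local port

def build_payload(model: str, prompt: str) -> bytes:
    # stream=False asks for one complete JSON response instead of chunks.
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()

def local_chat(model: str, prompt: str) -> str:
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_payload(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

No key, no outbound traffic: the request never leaves localhost.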
What's Crew?
Multi-model mode. Send one prompt and 2–4 models reply in parallel (e.g. Claude + GPT-5 + Gemini), each in their own lane. Pick the best reply or have them debate each other. Useful for consequential decisions where you don't want to rely on a single model's biases.
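The fan-out itself is ordinary concurrency. A sketch, where `ask(model, prompt)` stands in for whatever single-model call you already have (a hypothetical hook, not Labaik's actual code):

```python
from concurrent.futures import ThreadPoolExecutor

def fan_out(prompt: str, models: list, ask) -> dict:
    """Send one prompt to several models in parallel; return {model: reply}."""
    # Each model gets its own thread, so a slow provider never blocks a
    # fast one -- replies land independently, like Crew's per-model lanes.
    with ThreadPoolExecutor(max_workers=len(models)) as pool:
        futures = {m: pool.submit(ask, m, prompt) for m in models}
        return {m: f.result() for m, f in futures.items()}
```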
What's Skills?
Cron for AI. Labaik runs a prompt for you on a recurring schedule — "summarize HN at 8am," "draft my standup from git log at 6pm," "check prod status every 15 minutes." Results stream to a dedicated session you can scroll back through. Runs in-process (no cloud daemon), so your machine has to be awake.
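The core of any such scheduler is just "next occurrence of a wall-clock time." A sketch of that computation (a hypothetical helper, not Labaik's actual code):

```python
from datetime import datetime, timedelta

def next_run(now: datetime, hour: int, minute: int = 0) -> datetime:
    """Next time a daily skill like 'summarize HN at 8am' should fire."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:          # today's slot already passed -> tomorrow
        candidate += timedelta(days=1)
    return candidate
```

An in-process loop sleeps until `next_run(...)`, fires the prompt, and repeats — which is exactly why the machine has to be awake.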
How does memory work?
Two layers. Profile = always-on facts about you (name, role, preferences) injected into every turn. Memory = scoped, searchable with embeddings, filtered by workspace so your work stuff doesn't bleed into your personal stuff. Both are persisted to ~/.labaik/*.json with atomic writes, so they survive a crash mid-save.
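"Atomic writes" here means write-to-temp-then-rename: the target file is always either the old version or the new one, never half of each. A generic sketch of the pattern (not Labaik's actual code):

```python
import json
import os
import tempfile

def atomic_write_json(path: str, data: dict) -> None:
    # The temp file must live in the same directory as the target:
    # os.replace is only atomic within a single filesystem.
    dirname = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=dirname, suffix=".tmp")
    with os.fdopen(fd, "w") as f:
        json.dump(data, f)
        f.flush()
        os.fsync(f.fileno())   # force the bytes to disk before the rename
    os.replace(tmp, path)      # atomic swap: readers never see a partial file
```

A crash before the replace leaves only a stray .tmp file behind; the real file is untouched.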
How are my API keys stored?
In ~/.labaik/credentials.json with file mode 0600. Only your user can read them. They never leave the machine except as outbound requests to the provider you picked.
What about the built-in browser — what can the agent do with it?
Open any URL, read the rendered page, click links, fill forms, type into inputs, take screenshots, and scroll. Useful for research ("summarize this Hacker News thread and save the links I should read later"), QA (navigate a deploy preview and report layout issues), or dull web-scraping tasks. The browser is containerised inside the app, not a popup that steals focus.
Is this a Cursor fork? Claude Code fork?
Neither. Different surface (desktop, not IDE), different audience (everyone, not just coders), different codebase. Labaik takes the lesson of those products — good tool loops are mostly prompt engineering — but is written from scratch, MIT licensed, open source.
What's the license?
MIT. Source at github.com/alsayadi/alaude-desktop.