Jan
@jandotai.bsky.social
Open-source ChatGPT alternative that runs 100% offline.
December 23, 2025 at 1:08 PM
Introducing chat.jan.ai, a brand new way to use Jan 💛

You can now search the web, do deep research, and let Jan use your browser.

Powered by Jan-v2-VL-Max, a 30B model that beats Gemini 2.5 Pro & DeepSeek R1 on execution benchmarks.

Try it: chat.jan.ai/
Jan - Open Source AI
chat.jan.ai
December 22, 2025 at 10:22 AM
Jan Browser MCP is now available 🩵

Jan now has its own Chromium extension that makes browser use simpler and more stable. You can install it from the Chrome Web Store and connect it from within Jan. The video above shows the quick steps.

Search for Jan Browser MCP on the Chrome Web Store.
December 9, 2025 at 10:31 AM
Jan now supports Flatpak 🩵 flathub.org/en/apps/ai....
Install Jan on Linux | Flathub
Private offline AI assistant
flathub.org
December 8, 2025 at 12:58 PM
Jan v0.7.5 is live. We found a model-import issue on Windows and fixed it. Update your Jan; the bug should be gone now.
December 8, 2025 at 10:16 AM
You can now attach files in Jan.

In v0.7.4, you can add a file into the chat and ask anything about it.

Update your Jan or download the latest.
December 5, 2025 at 10:34 AM
What was the first AI model you ever ran locally?
November 25, 2025 at 3:59 AM
Jan takes care of your to-dos.
November 24, 2025 at 7:47 AM
You can create, update, complete, or delete your Todoist tasks in Jan. So your AI can actually help you get things done.
November 24, 2025 at 7:24 AM
You can run Jan-v2-VL on MLX now. Huge thanks to the amazing MLX community for making this happen 🧡 huggingface.co/mlx-communi...
mlx-community/Jan-v2-VL-high-bf16-mlx · Hugging Face
huggingface.co
November 21, 2025 at 6:50 AM
Jan-v2-VL-high reaches 49 steps on the Long-Horizon Execution benchmark.

Qwen3-VL-8B-Thinking reaches 5, Qwen2.5-VL-7B-Instruct and Gemma-3-12B reach 2, and Llama-3.1-Nemotron-8B and GLM-4.1-V-9B-Thinking reach 1.

Models: huggingface.co/collections...
November 14, 2025 at 7:10 AM
Introducing Jan-v2-VL, a multimodal agent built for long-horizon tasks.

Jan-v2-VL executes 49 steps without failure, while the base model stops at 5 and other similar-scale VLMs stop between 1 and 2.

Models: huggingface.co/collections...

Credit to the Qwen team for Qwen3-VL-8B-Thinking!
November 13, 2025 at 10:29 AM
👀
November 12, 2025 at 7:18 AM
What is your open-source AI stack for coding?
November 10, 2025 at 7:00 AM
You can run Kimi-K2-Thinking, the strongest agentic model, in Jan through Hugging Face. Add it to your Hugging Face Inference models to use it.
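
The same model can also be reached outside Jan with the huggingface_hub client. A minimal sketch, assuming you have a Hugging Face token with Inference access and that the repo id is moonshotai/Kimi-K2-Thinking (check the exact id on the Hub):

```python
# Hedged sketch: call Kimi-K2-Thinking through Hugging Face Inference directly.
# The repo id and token are assumptions; substitute the exact id from the Hub
# and your own hf_... token.
from huggingface_hub import InferenceClient

client = InferenceClient(model="moonshotai/Kimi-K2-Thinking", token="hf_...")

response = client.chat_completion(
    messages=[{"role": "user", "content": "Outline a plan to refactor a large Python module."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```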
November 7, 2025 at 6:50 AM
We're looking for someone who can take full ownership of Jan's growth 🩵
menlo.bamboohr.com/careers/109
menlo.bamboohr.com
November 6, 2025 at 4:58 AM
What's your go-to open-source model now?
November 4, 2025 at 9:00 AM
What's the highest number of tools your models could use in Jan?
November 4, 2025 at 3:45 AM
You can now use Qwen3-VL in Jan.

Find the GGUF model on Hugging Face, click "Use this model" and select Jan, or copy the model link and paste it into Jan Hub.

Thanks Qwen 🧡
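
If you prefer to manage the file yourself, you can also fetch the GGUF with the huggingface_hub library and import the downloaded file into Jan. A minimal sketch; the repo id and filename below are placeholders, not the real Qwen3-VL artifact names:

```python
# Hedged sketch: download a GGUF manually, then import the local file into Jan.
# repo_id and filename are placeholders; copy the real ones from the model page.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="someorg/Qwen3-VL-GGUF",           # placeholder repo id
    filename="qwen3-vl-instruct-Q4_K_M.gguf",  # placeholder GGUF filename
)
print("Saved to:", local_path)  # point Jan's model import at this file
```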
November 3, 2025 at 10:28 AM
llama.cpp in Jan has been updated. Tap "Update Now", or update it from Settings. Thanks to GGML and the open-source community 💙
November 3, 2025 at 8:13 AM
ANNOUNCEMENT | We've seen multiple tokens appearing under the name "Jan," some using our branding and visuals without authorization. These projects have no link to us. Please be cautious and trust only our official accounts for verified information.
October 31, 2025 at 3:39 AM
Run gpt-oss-safeguard-20b in Jan via Groq with 1k+ tokens/sec ⚡️
October 29, 2025 at 12:42 PM
Your Ollama models can use MCP servers in Jan.
October 29, 2025 at 8:01 AM
You can now run Ollama models in Jan.

Go to Settings, Model Providers, add Ollama, and set the Base URL to http://localhost:11434/v1.

Your 🦙 Ollama models will then be ready to use in 👋 Jan.
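
That Base URL is Ollama's OpenAI-compatible endpoint, which is why Jan can talk to it directly. A minimal sketch of the same call from a script, assuming Ollama is running locally and a model (here llama3.2, as a placeholder) has already been pulled:

```python
# Minimal sketch: Jan talks to Ollama through its OpenAI-compatible /v1 endpoint,
# so any OpenAI-style client pointed at the same Base URL works too.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # same Base URL you enter in Jan's settings
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

response = client.chat.completions.create(
    model="llama3.2",  # placeholder; use any model you've pulled in Ollama
    messages=[{"role": "user", "content": "Say hello from Ollama."}],
)
print(response.choices[0].message.content)
```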
October 29, 2025 at 6:25 AM
Jan can now search the web using Exa. Turn it on from the MCP section.
October 28, 2025 at 10:18 AM