You can now search the web, do deep research, and let Jan use your browser.
Powered by Jan-v2-VL-Max, a 30B model that beats Gemini 2.5 Pro & DeepSeek R1 on execution benchmarks.
Try it: chat.jan.ai/
Jan now has its own Chromium extension that makes browser use simpler and more stable. You can install it from the Chrome Web Store and connect it from within Jan. The video above shows the quick steps.
Search for Jan Browser MCP on the Chrome Web Store.
In v0.7.4, you can add a file to the chat and ask anything about it.
Update Jan or download the latest version.
Qwen3-VL-8B-Thinking reaches 5 steps, Qwen2.5-VL-7B-Instruct and Gemma-3-12B reach 2, and Llama-3.1-Nemotron-8B and GLM-4.1-V-9B-Thinking reach 1.
Models: huggingface.co/collections...
Jan-v2-VL executes 49 steps without failure, while the base model stops at 5 and other similar-scale VLMs stop between 1 and 2.
Models: huggingface.co/collections...
Credit to the Qwen team for Qwen3-VL-8B-Thinking!
menlo.bamboohr.com/careers/109
Find the GGUF model on Hugging Face, click "Use this model" and select Jan, or copy the model link and paste it into Jan Hub.
Thanks Qwen 🧡
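If you'd rather fetch the GGUF file yourself before adding it, here's a minimal sketch using the huggingface_hub package; the repo and file names below are placeholders, not a specific release, so swap in the model you found:

```python
# Minimal sketch: download a GGUF file locally with huggingface_hub.
# repo_id and filename are hypothetical placeholders; use the actual model repo.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-org/your-model-GGUF",    # hypothetical GGUF repo
    filename="your-model-Q4_K_M.gguf",     # hypothetical quantized file
)
print(local_path)  # local path to the downloaded GGUF file
```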
Go to Settings, Model Providers, add Ollama, and set the Base URL to http://localhost:11434/v1.
Your 🦙 Ollama models will then be ready to use in 👋 Jan.
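That Base URL works because Ollama exposes an OpenAI-compatible API. A quick sketch to sanity-check the endpoint outside Jan, assuming you have the openai Python package installed and have already pulled a model (llama3.1 here is just an example name):

```python
# Minimal sketch: talk to Ollama's OpenAI-compatible endpoint directly.
# Assumes `pip install openai` and `ollama pull llama3.1` have been run.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # same Base URL you set in Jan
    api_key="ollama",                      # any non-empty string; Ollama ignores it
)

resp = client.chat.completions.create(
    model="llama3.1",                      # example model name; use one you've pulled
    messages=[{"role": "user", "content": "Say hi in five words."}],
)
print(resp.choices[0].message.content)
```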