Msty
@msty.app
Use online and offline LLM models without a headache. https://msty.app
By @ashokgelal.bsky.social & @nikeshparajuli.bsky.social
🇺🇸
Oh dang! Sorry yes, this is STT not TTS 🤦‍♂️ Sunday evening; I should be taking a break. Sorry about that. TTS isn't possible and is NOT on our immediate roadmap.
March 30, 2025 at 9:36 PM
✅ Show collapsing UI for thinking models from @openrouter.bsky.social
✅ See thinking time as well as copy the text
✅ Simplified Chinese
✅ Russian
✅ Improved LaTeX
✅ Better model downloading progress UI
And more!
msty.app/changelog
Latest Msty Changelog
AI beyond just plain chat. Private, Offline, Split chats, Branching, Concurrent chats, Web Search, RAG, Prompts Library, Vapor Mode, and more. Perfect LM Studio, Jan AI, and Perplexity alternative. Us...
msty.app
January 30, 2025 at 3:10 PM
Tinyllama is really a bad model and hallucinates a lot; it's only useful for running quick tests. We suggest using a Gemma model or Mistral Nemo. Mistral Nemo is actually very, very good.
January 24, 2025 at 4:16 PM
This year felt like a discovery phase for us—we’ve learned so much about what works, what doesn’t, what users need, and where AI is headed. With this clarity, our team is excited to build even more incredible features with you all in the year ahead. See you in 2025!
January 1, 2025 at 2:31 AM
2024 has been one of the best years. In less than 9 months we achieved so much. However, as amazing as this year was, we truly believe Msty in 2025 will eclipse this year by a large margin.
January 1, 2025 at 2:31 AM
There is more! Check out the full changelog here: msty.app/changelog
Merry Christmas and Happy Holidays to you all! Thanks for a wonderful year and we're looking forward to seeing you again next year, which is going to be even more amazing!
December 24, 2024 at 8:59 PM
New: Prompt Caching with Claude models (Beta)
New: Korean language support (work in progress)
New: Gemini models support
New: KV cache quant enabled in models
New: Support for Cohere AI
December 24, 2024 at 8:59 PM
New: Model compatibility gauge for downloadable models
New: Remote embedding models support
New: Bookmark chats and messages (Aurum Perk)
New: Network Proxy Configuration (Beta)
New: Local AI Models (Llama 3.3, Llama 3.2 Vision, QwQ, etc)
December 24, 2024 at 8:59 PM