magnesit
@magnesit.bsky.social
Fan of privacy | thinks people should care less about how others wanna live | has a love-hate relationship with IPv6 | advocate of using AI in a reasonable way.
Update: support for the model improved with newer versions of llama.cpp, and it hits >60 t/s decode speed now.

I still don't believe this thing will run well on a phone though.
December 29, 2025 at 9:43 AM
Doubling the number of active parameters while cutting the number of experts in half feels arbitrary and has shown no improvement in output quality so far. The model does run slower, though.

We'll have to see what Magistral can make out of it.
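The tradeoff behind that complaint can be sketched with some back-of-the-envelope arithmetic. This is a hedged illustration with made-up numbers (the expert counts, top-k values, and sizes below are hypothetical, not the actual model's configuration): halving the expert count shrinks total parameters, but doubling the experts activated per token grows the active set, which is what decode speed scales with.

```python
# Rough MoE parameter arithmetic (hypothetical numbers, arbitrary
# billion-parameter units; not the real model's config).
def moe_params(n_experts, top_k, expert_size, shared_size):
    """Return (total, active) parameter counts for a simple MoE layer stack.

    total  = shared weights + all experts (what you store)
    active = shared weights + top_k routed experts (what each token computes)
    """
    total = shared_size + n_experts * expert_size
    active = shared_size + top_k * expert_size
    return total, active

# Hypothetical "old" config: 64 experts, 4 active per token.
old_total, old_active = moe_params(64, 4, 1.0, 2.0)
# Hypothetical "new" config: 32 experts, 8 active per token.
new_total, new_active = moe_params(32, 8, 1.0, 2.0)

print(old_total, old_active)  # 66.0 6.0
print(new_total, new_active)  # 34.0 10.0
```

Under these toy numbers the new config stores roughly half the weights but computes ~1.7x the active parameters per token, which is consistent with the slower decode noted above.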
December 20, 2025 at 10:44 AM
They did: mistral.ai/news/mistral-3

3B, 8B, 14B, 675B.
Introducing Mistral 3 | Mistral AI
A family of frontier open-source multimodal models
mistral.ai
December 20, 2025 at 10:41 AM
I'm glad the RL slop has been reduced in the Ministral series of models. The models aren't absolutely SOTA, but at least they no longer seem to spam \boxed for every single problem.

Their Magistral pipeline could probably continue to scale extremely well in the future though.
December 20, 2025 at 10:38 AM
With enough modifications made to it, I'm certain Mistral could create an entirely new high-performance LLM from it.

They could also use their own research to start tackling hallucinations, one of the most pressing issues in LLMs.
December 20, 2025 at 10:38 AM
What Mistral needs to replicate is the quality of ML research; there is little use in training a European LLM just for it to perform worse than the counterparts it took its technologies from.

Furthermore, reverse-engineering GPT-OSS seems worth a shot.
December 20, 2025 at 10:38 AM
Meta will probably make a much bigger comeback with their future Llama-series LLMs than Google did with Gemini 3, at least in the eyes of the technical community.

My stance that OpenAI builds the most "reliable" LLMs prevails. They just work most of the time.
December 20, 2025 at 10:29 AM
I genuinely thought Google could do better. They have absurdly good vertical integration. They are literally the perfect candidate for building LLMs given their sheer dataset size, compute capability, and top-tier researchers.
December 20, 2025 at 10:29 AM
The model still makes such stupid mistakes that I don't find myself using it at all anymore, not even for Nano Banana Pro, whose capabilities have been severely over-hyped.

Nothing may do more than Gemini 3 to help the AI bubble pop.
December 20, 2025 at 10:26 AM
THEY FUCKING FIXED IT
November 22, 2025 at 11:06 PM
Yep, that's what I meant to say - device support will come eventually, but Android 16 QPR1 releasing doesn't go hand in hand with that immediately. :)
November 17, 2025 at 10:19 PM
You will likely need to wait a bit longer than the generic QPR1 release until everything regarding the Pixel 10 is sorted out. I might be wrong though.
November 17, 2025 at 3:35 PM
As far as I am aware, Android 16 QPR1 being ported doesn't necessarily mean Pixel 10 device support is coming. Google dropped the AOSP device source trees for their new phones, which not only made the jump to Android 16 harder but also hindered Pixel 10 adoption.
November 17, 2025 at 3:35 PM
Still the case btw
November 2, 2025 at 7:58 PM
Still the case btw
October 19, 2025 at 9:22 AM
Are there any updates from Google on when Android 16 QPR1 will be released to AOSP? Haven't heard anything from them in a long time... 😬
October 2, 2025 at 11:12 AM
This is so cool to see! Any plans on when this will allow us to transfer chats between Android and iOS? That'd be the BEST feature in a helluva long time 🙏.

Besides polls in chats.
September 9, 2025 at 11:23 AM
"AOSP is not going anywhere", they said.

"Sideloading isn't going anywhere", they said.
September 7, 2025 at 5:37 PM