Axel 👨‍💻 Developer
@axelgarciak.bsky.social
6.2K followers 2.6K following 680 posts
👨‍💻 Software Engineer 💾 Software minimalist/retro 🤖 AI tinkerer 🏗️ Building tech communities 🇪🇺 UK 🇬🇧🇩🇪🇻🇪 | Check bio: axelgarciak.com/bio
Qwen3-Next-80B-A3B 👀

Only 3B active parameters and almost as good (in benchmarks) as Qwen3-235B-A22B and Qwen3-32B.
Gemma 3 270M (Million not Billion) released.

I keep a close eye on small models, and this one is a great win.

I've seen some tests and it's clever enough despite its size, but the main purpose is to fine-tune it for specialized tasks.
Qwen3-coder seems to be great 👀
So many great announcements at Google I/O that it's hard to fit them all in one list.

I like Gemma 3n. They are decent lightweight models to run offline on smartphones.

They have 5B and 8B raw parameters but a memory footprint comparable to 2B-to-4B models!
Many good releases from open and closed-source providers. I'm eagerly waiting for what DeepSeek is going to release.
Companies should release their AI models/LLM as soon as possible.

That way they don't have to compare themselves to Qwen3 or DeepSeek R2 and their distilled versions.

The longer they wait, the more embarrassing it will be when they avoid comparing their model to those two. 😅
They are good! Obviously not what you get from models like o4-mini, Claude 3.7, or Gemini 2.5 Pro.

But they are great for local use cases.

It's nice that they are hybrid, i.e., thinking can be deactivated.

Qwen2.5 was already the best in its class for a while! Hopefully we'll get a Coder fine-tune!
Qwen3 is the ultimate GPU-Poor LLM!

Qwen3 4B and 8B are good for many use cases. Even Qwen3 0.6B seems coherent.

Qwen3-30B-A3B can run on 4GB of VRAM if you have enough system RAM.

I ran it on 4GB of VRAM and got 12 tok/s!
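For anyone curious how that kind of partial offload looks in practice, here's a sketch using llama.cpp's `llama-cli`. The GGUF filename and the number of offloaded layers are illustrative assumptions, not what I actually used — tune `-ngl` until your VRAM is full and let the rest of the model sit in system RAM.

```shell
# Illustrative only: quantized model file and layer count are assumptions.
# -ngl offloads that many transformer layers to the GPU; remaining layers
# run from system RAM, which is what makes a 30B MoE fit on a 4GB card.
llama-cli -m qwen3-30b-a3b-q4_k_m.gguf -ngl 12 -c 8192 \
  -p "Explain mixture-of-experts in one paragraph."
```

Because only ~3B parameters are active per token, the CPU-resident layers hurt throughput much less than they would for a dense 30B model.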
Yeah benchmarks don't translate fully to real use cases. Claude is still king for coding.
Of course benchmarks will differ from real-life use cases.

However, previous Gemma models were quite good relative to their size, so I'm sure Gemma 3 is really good!
Good to see smaller LLMs pushing the Pareto frontier of size vs performance.

Gemma 3 27B by Google DeepMind was released with performance between DeepSeek V3 and R1.

OlympicCoder 7B by Hugging Face outperforms Claude 3.7 on Olympiad-level programming problems.
It had been happening for a while before ChatGPT came out, but you're right that it will accelerate.
It is useful indeed, I use it every day, but it's far from being AGI or ASI.
The debate around AI is polarized: either it's taking our jobs tomorrow or it's just hype.

Anyone who's actually used AI for longer than a few minutes knows the truth is far less sensational.
Debian or Fedora if you want mainstream distros.
I like that AMD has technologies that are more widely applicable, such as FSR, but as usual NVIDIA is in the lead with DLSS 4.
Yeah, frame interpolation is also cool. The key difference is that interpolation is reactive (analyzing past frames), while DLSS is predictive (anticipating future frames).
One of the coolest AI use cases I've read about recently is DLSS 4 for NVIDIA GPUs.

DLSS stands for Deep Learning Super Sampling.

Games use DLSS's AI to predict multiple frames and improve image quality/upscaling.

That allows lower-spec GPUs to play at higher resolutions/FPS!
Gotta account for those days when the OpenAI API is down 😏
Made-up tech news headline #1:

Tech CEO Replaces Dev Team with AI, Only to Be Ousted by Ex-Employee's AI-Powered SaaS One Month Later.
Interesting text-to-video model preserving transparency: TransPixar.
Agreed. To their credit, these newer GPUs have native fp4 support, whereas before you had to run at fp8 even for fp4 calculations.

If you use fp4 exclusively, these newer GPUs do give you a real speed advantage.

If you use fp8, there is still an improvement, but far less than advertised.
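The weight-memory side of the fp8-vs-fp4 trade-off is simple arithmetic: 4 bits per parameter is half of 8. A quick sketch (the 70B parameter count is just an illustrative figure, not a specific model):

```python
def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory for model weights alone.

    Ignores KV cache, activations, and framework overhead.
    """
    return n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

n = 70e9  # illustrative 70B-parameter model
print(weight_memory_gb(n, 8))  # fp8 -> 70.0 GB
print(weight_memory_gb(n, 4))  # fp4 -> 35.0 GB
```

The speed story is separate: halving the weight size always helps memory-bandwidth-bound inference, but you only get the advertised compute throughput when the hardware executes fp4 natively instead of falling back to fp8 units.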