Xuan Son Nguyen
@ngxson.hf.co
770 followers 140 following 79 posts
Software Engineer @ Hugging Face 🤗
ngxson.hf.co
Very nice touch, Gmail 😅
ngxson.hf.co
Part 2 of my journey building a smart home! 🚀

In this part:
> ESPHome & custom component
> RF433 receiver & transmitter
> Hassio custom addon
ngxson.hf.co
Just published a new article on my blog 🏃‍♂️

Building My Smart Home - Part 1: Plan, Idea & Home Assistant

Check it out!
ngxson.hf.co
Kudos to Google and the llama.cpp team! 🤝

GGUF support for Gemma 270M right from day-0
ngxson.hf.co
Richy Mini and SmolLM3 are featured in GitHub's weekly news! 🚀 🚀
ngxson.hf.co
Gemma 3n has arrived in llama.cpp 👨‍🍳 🍰

Comes in 2 flavors: E2B and E4B (E means "effective/active parameters")
ngxson.hf.co
See you this Sunday at the AI Plumbers Conference: 2nd edition!

📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
👉 Register here: lu.ma/vqx423ct
ngxson.hf.co
✨✨ AIFoundry is bringing you the AI Plumbers Conference: 2nd edition — an open source meetup for low-level AI builders to dive deep into "the plumbing" of modern AI

📍 Where: GLS Event Campus Berlin, Kastanienallee 82 | 10435 Berlin
📅 When: June 15, 2025
👉 Register now: lu.ma/vqx423ct
ngxson.hf.co
Hugging Face Inference Endpoints now officially support deploying **vision** models via llama.cpp 👀 👀

Try it now: endpoints.huggingface.co/catalog
ngxson.hf.co
Real-time webcam demo with @huggingface.bsky.social SmolVLM and llama.cpp server.

All running locally on a MacBook M3
ngxson.hf.co
Although we have A100, H200, M3 Ultra, etc.

Still can't match the power of that Casio FX 😆
ngxson.hf.co
llama.cpp vision support just got much better! 🚀

Traditionally, models with complicated chat templates, like MiniCPM-V or Gemma 3, required a dedicated binary to run.

Now, you can run all supported models via a single binary, "llama-mtmd-cli" 🔥

(Only Qwen2VL is not yet supported)
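A minimal usage sketch (the file names below are placeholders, not files shipped with llama.cpp; the flags reflect llama.cpp's multimodal CLI as I understand it):

```shell
# Run a vision model with the unified multimodal CLI.
# model.gguf, mmproj.gguf, and photo.jpg are placeholder file names.
llama-mtmd-cli -m model.gguf \
    --mmproj mmproj.gguf \
    --image photo.jpg \
    -p "Describe this image"
```

The `--mmproj` file holds the multimodal projector weights that map image embeddings into the language model's space; it is distributed alongside the main GGUF.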
ngxson.hf.co
Finally have time to write a blog post about ggml-easy! 😂

ggml-easy is a header-only wrapper for GGML that simplifies development with a cleaner API, easy debugging utilities, and native safetensors loading ✨ Great for rapid prototyping!
ngxson.hf.co
Someone at Google definitely had a lot of fun making this 😆

And if you don't know, it's available in "Starter apps" section on AI Studio. The app is called "Gemini 95"
ngxson.hf.co
Estimating an LLM's memory requirement WITHOUT a calculator?

Just use your good old human brain 🧠 😎

Check out my 3‑step estimation 🚀
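The post's exact 3-step method isn't reproduced here; as a rough back-of-the-envelope sketch along the same lines (the default layer/head/context numbers below are my own assumptions, roughly matching a 7B-class model, not figures from the post):

```python
# Rough VRAM estimate for running an LLM, in three steps:
#   1. weights  = n_params * bytes_per_param (2 for F16, ~0.56 for Q4-class quants)
#   2. KV cache = 2 (K and V) * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
#   3. add ~10-20% overhead for activations and scratch buffers

def estimate_vram_gb(n_params_b, bytes_per_param=2.0,
                     n_layers=32, n_kv_heads=8, head_dim=128,
                     ctx_len=4096, kv_bytes=2, overhead=0.15):
    weights = n_params_b * 1e9 * bytes_per_param
    kv_cache = 2 * n_layers * n_kv_heads * head_dim * ctx_len * kv_bytes
    total = (weights + kv_cache) * (1 + overhead)
    return total / 1e9  # decimal GB

# Example: a 7B model in F16 with a 4096-token context
print(round(estimate_vram_gb(7), 1))  # ~16.7 GB
```

For mental math, the weights term dominates: params times bytes-per-param gets you within ~20% for short contexts.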
ngxson.hf.co
Google has quite a good sense of humor 😂

Joke aside, a 1B model quantized to Q4 without degrading performance is sweet 🤏
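As rough arithmetic on why Q4 is sweet (the 4.5 bits/weight figure is an assumption, approximating the average for Q4_K-style quants, which store per-block scales on top of the 4-bit values):

```python
# Model file size: n_params * bits_per_weight / 8 bytes.
def model_size_gb(n_params, bits_per_weight):
    return n_params * bits_per_weight / 8 / 1e9

f16 = model_size_gb(1e9, 16)   # full-precision F16 baseline
q4  = model_size_gb(1e9, 4.5)  # Q4-class quant, ~4.5 bits/weight on average
print(f"{f16:.2f} GB -> {q4:.2f} GB")  # 2.00 GB -> 0.56 GB
```

Roughly a 3.5x shrink, which is what makes a 1B model comfortable on phones and laptops.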
ngxson.hf.co
Cooking a fun thing today, I can now load safetensors file directly to GGML without having to convert it to GGUF!

Why? Because this allows me to do experiments faster, especially with models outside of llama.cpp 😆
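The safetensors layout itself is simple, which is what makes direct loading feasible: an 8-byte little-endian length, a JSON header mapping tensor names to dtype/shape/offsets, then the raw tensor data. A minimal stdlib-only parser sketch (the tensor name "w" and the blob built here are illustrative, not from the author's code):

```python
import json
import struct

def read_safetensors_header(buf: bytes) -> dict:
    """Parse the JSON header of a safetensors blob.

    Layout: 8-byte little-endian u64 header length, then that many
    bytes of JSON mapping tensor names to {dtype, shape, data_offsets}.
    """
    (n,) = struct.unpack("<Q", buf[:8])
    return json.loads(buf[8:8 + n].decode("utf-8"))

# Build a tiny in-memory blob: one F32 tensor of shape [2, 2] (16 bytes of data)
meta = {"w": {"dtype": "F32", "shape": [2, 2], "data_offsets": [0, 16]}}
hdr = json.dumps(meta).encode("utf-8")
blob = struct.pack("<Q", len(hdr)) + hdr + b"\x00" * 16

print(read_safetensors_header(blob)["w"]["shape"])  # [2, 2]
```

With the header parsed, each tensor's bytes can be sliced straight out of the file by its `data_offsets` and handed to GGML, skipping the GGUF conversion step.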
ngxson.hf.co
No vibe coding. Just code it ✅

Visit my website --> ngxson.com