The Hybrid Group
hybridgroup.com
The Hybrid Group
@hybridgroup.com
We're the software company that makes your hardware work.

https://hybridgroup.com
Pinned
We're moving at the speed of thought, so yzma v1.0 beta2 is out!

Better, faster, and with more benchmarks to show it, too.

Run local models using Go with your CPU, CUDA, or Vulkan.

You know what to do!

github.com/hybridgroup/...

#golang #llama #llamacpp
GitHub - hybridgroup/yzma: Go for hardware accelerated local inference with llama.cpp directly integrated into your applications
Go for hardware accelerated local inference with llama.cpp directly integrated into your applications - hybridgroup/yzma
github.com
November 24, 2025 at 2:51 PM
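For anyone curious what "run local models using Go" looks like in practice, here is a minimal sketch. The module path is the real repository, but the function and type names (LoadModel, Generate, Close) are illustrative placeholders rather than yzma's confirmed API; see the repository README for the actual calls.

```go
package main

import (
	"fmt"

	// Real repository path; the calls below are illustrative placeholders only.
	"github.com/hybridgroup/yzma"
)

func main() {
	// Hypothetical: load a local GGUF model file. llama.cpp selects the
	// backend (CPU, CUDA, or Vulkan) depending on how the library was built.
	model, err := yzma.LoadModel("qwen2.5-0.5b-instruct-q4_k_m.gguf")
	if err != nil {
		panic(err)
	}
	defer model.Close()

	// Hypothetical: run a single completion fully in-process, no model server.
	out, err := model.Generate("Write a haiku about local inference.")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```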
yzma 1.0 beta1 is out!

Use Go for hardware accelerated local inference with llama.cpp directly integrated into your applications. No external model servers or CGo.

Go get it right now!

github.com/hybridgroup/...

#golang #llama #vlm #llm #local #gpu
GitHub - hybridgroup/yzma: Go for hardware accelerated local inference with llama.cpp directly integrated into your applications
Go for hardware accelerated local inference with llama.cpp directly integrated into your applications - hybridgroup/yzma
github.com
November 20, 2025 at 9:14 PM
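The "no CGo" part generally means loading the llama.cpp shared library at runtime and binding its C functions from pure Go. One common way to do that is github.com/ebitengine/purego; the sketch below shows that general technique (not necessarily yzma's internals), binding a real llama.cpp symbol on Linux/macOS.

```go
package main

import (
	"fmt"

	"github.com/ebitengine/purego"
)

func main() {
	// Load the llama.cpp shared library at runtime: no CGo, no build-time
	// linking. (purego.Dlopen is for Linux/macOS; Windows uses LoadLibrary.)
	lib, err := purego.Dlopen("libllama.so", purego.RTLD_NOW|purego.RTLD_GLOBAL)
	if err != nil {
		panic(err)
	}

	// Bind a real llama.cpp C function by symbol name to a Go func variable.
	var llamaBackendInit func()
	purego.RegisterLibFunc(&llamaBackendInit, lib, "llama_backend_init")

	llamaBackendInit()
	fmt.Println("llama.cpp backend initialized from pure Go")
}
```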
Reposted by The Hybrid Group
Thanks to @deadprogram.com and his Yzma project, you don't need to deploy model servers anymore; you can run GGUF models directly in your #golang code.

I have cool examples, including a full RAG app using DuckDB. I will have more complex examples soon.

github.com/ardanlabs/ai...
ai-training/cmd/examples/example13 at main · ardanlabs/ai-training
Provide examples for Go developers to use AI in their products - ardanlabs/ai-training
github.com
November 16, 2025 at 3:36 PM
Reposted by The Hybrid Group
Getting the zero-kb02 boards ready for the TinyGo Keeb Tour (a soldering + software workshop) being held as part of #BuriKaigi 2026. Everyone, come solder with us! Right now there is only TinyGo firmware, but we're also looking for people to write firmware with Vial, zmk, or prk.
#tinygo_keeb
November 14, 2025 at 12:21 AM
Reposted by The Hybrid Group
"Captions With Attitude" in your browser from your webcam generated by a Vision Language Model (VLM) from a Go program running entirely on your local machine using llama.cpp!

github.com/hybridgroup/...

#golang #vlm #openCV #llama #yzma
November 11, 2025 at 8:24 PM
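A rough sketch of the capture loop a demo like this implies, using GoCV for the webcam (GoCV itself does use CGo; the "no CGo" point is about the llama.cpp side). describeFrame is a stand-in for the real VLM call through yzma, and the browser-streaming part is left out.

```go
package main

import (
	"fmt"

	"gocv.io/x/gocv"
)

// describeFrame stands in for the real VLM call; in the actual demo the frame
// would be handed to a vision language model via yzma/llama.cpp for a caption.
func describeFrame(img gocv.Mat) string {
	return fmt.Sprintf("frame %dx%d captured", img.Cols(), img.Rows())
}

func main() {
	webcam, err := gocv.OpenVideoCapture(0)
	if err != nil {
		panic(err)
	}
	defer webcam.Close()

	img := gocv.NewMat()
	defer img.Close()

	// Caption a handful of frames and print the results.
	for i := 0; i < 10; i++ {
		if ok := webcam.Read(&img); !ok || img.Empty() {
			continue
		}
		fmt.Println(describeFrame(img))
	}
}
```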
Life comes at you fast, and so do new releases of yzma!

Use pure Go for hardware accelerated local inference on Vision Language Models & Tiny Language Models.

0.9.0 out now with API improvements, model downloading, & more.

github.com/hybridgroup/...

#golang #llama #vlm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo. - hybridgroup/yzma
github.com
November 7, 2025 at 4:59 PM
Reposted by The Hybrid Group
Ult. Software Design LIVE Schedule

Join @goinggo.net & @kenriquezcodes.bsky.social in this week's streams:
Tue. 11/4 & Thu. 11/6 from 11am-1pm EST

Tomorrow we'll have a special guest: @deadprogram.com

Stay tuned 1hr before the LIVE show for the stream link!😎
📽️Last episodes here: bit.ly/3CShDOS
November 3, 2025 at 8:31 PM
yzma 0.8.0 is out, now with over 87% coverage of the llama.cpp API from pure Go! More robust, more examples.

Go get it right now!

github.com/hybridgroup/...

#golang #llamacpp #vlm #slm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo. - hybridgroup/yzma
github.com
November 3, 2025 at 10:38 AM
Reposted by The Hybrid Group
On October 12th, the first-ever TinyGo Conf happened in Tokyo, Japan. Report from team member Daniel Esteban aka "Conejo" tells all!

#tinygo #tinygoconf #golang #japan

madriguera.me/tinygo-conf-...
TinyGo Conf 2025 JAPAN
On October 12th, the first-ever TinyGo Conf happened in Tokyo, Japan. I planned to write about it sooner, but I did so many things during my trip that I didn't have the time nor strength to do it unti...
madriguera.me
October 28, 2025 at 6:04 PM
Reposted by The Hybrid Group
"That Machine Always Lies: Truth and Fiction in the Age of Artificial Intelligence"

thatmachinealwayslies.com
That Machine Always Lies
Truth and Fiction in the Age of Artificial Intelligence
thatmachinealwayslies.com
October 21, 2025 at 9:59 AM
Reposted by The Hybrid Group
I'm so excited about the YZMA project from @deadprogram.com. I've taken 3 of his examples and cleaned them up. Next step is to build a mini version of the Ollama service to show the real power of YZMA. #golang

github.com/ardanlabs/ai...
ai-training/cmd/examples/example13 at main · ardanlabs/ai-training
Provide examples for Go developers to use AI in their products - ardanlabs/ai-training
github.com
October 20, 2025 at 6:23 PM
Did we figure out how to use yzma to work on itself? We can't tell you that, but here's release 0.4, now with over 70% of llama.cpp functionality complete!

github.com/hybridgroup/...

#golang #ml #llamacpp #llama #vlm #llm #slm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
yzma lets you use Go for local inference+embedding with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo. - hybridgroup/yzma
github.com
October 14, 2025 at 12:43 PM
We just released yzma 0.3 with embedding support, bug fixes, and more.

Go get it right now!

github.com/hybridgroup/...

#golang #llama #llamacpp #ml #vlm #tlm #slm #llm
Release 0.3.0 · hybridgroup/yzma
What's Changed
feature: return wrapped errors when loading functions by @deadprogram in #17
docs: add information on what llama.cpp features are already implemented by @deadprogram in #18
examples...
github.com
October 14, 2025 at 7:01 AM
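Embedding support is mostly useful for similarity search and RAG. Whatever API yzma exposes for producing the vectors, what you do with them afterwards is plain math; here is a minimal cosine-similarity helper in Go, independent of any particular library.

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two embedding vectors of equal length.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	// Toy vectors standing in for model-generated embeddings.
	a := []float32{0.1, 0.9, 0.2}
	b := []float32{0.12, 0.85, 0.25}
	fmt.Printf("similarity: %.3f\n", cosine(a, b))
}
```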
Reposted by The Hybrid Group
#tinygo_conf
Thank you so much, @deadprogram.com !!!
(I added the caption, btw!)
October 12, 2025 at 2:01 AM
Reposted by The Hybrid Group
Already have an update for yzma with Windows support working & also a simplified developer interface for loading. Go get it now!

github.com/hybridgroup/...
GitHub - hybridgroup/yzma: yzma lets you use Go to perform local inference with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
yzma lets you use Go to perform local inference with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo. - hybridgroup/yzma
github.com
October 9, 2025 at 12:56 PM
Reposted by The Hybrid Group
yzma is a new Go package for local inference with Vision Language Models (VLMs) & Large Language Models (LLMs) using llama.cpp without CGo.

github.com/hybridgroup/...

#golang #llamacpp #llm #vlm #slm #tlm
GitHub - hybridgroup/yzma: yzma lets you use Go to perform local inference with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo.
yzma lets you use Go to perform local inference with Vision Language Models (VLMs) and Large Language Models (LLMs) using llama.cpp without CGo. - hybridgroup/yzma
github.com
October 8, 2025 at 9:52 AM
Reposted by The Hybrid Group
Getting ready to set up for my @golab.io keynote "That Machine Always Lies".
#golab2025 #golab #golang #tinygo
October 6, 2025 at 2:26 PM
Reposted by The Hybrid Group
First speaker of the day at @golab.io is @danicat83.bsky.social with "The Gopher's Craft in the Age of AI"
#golab #golang
October 6, 2025 at 7:30 AM
Reposted by The Hybrid Group
Getting started here at the 10th edition of @golab.io with some opening remarks!
#golang #golab
October 6, 2025 at 7:22 AM
Reposted by The Hybrid Group
Now starting my journey to @golab.io in Florence. See you there!
#golab #golang #tinygo
October 4, 2025 at 7:00 AM
Reposted by The Hybrid Group
Join Ron Evans' talk and discover Truth and Fiction in the Age of Artificial Intelligence
#GoLab 2025 #Golang #AI @deadprogram.com
October 3, 2025 at 11:05 AM
Coming up in just 1 hour: @deadprogram.com is on @opencv.bsky.social Live! talking about OpenCV using @golang.org & WebAssembly. Don't miss it!

YT: youtube.com/live/gq2MXxv...
Twitch: twitch.tv/opencvofficial
Zoom: opencv.live

#golang #computerVision #openCV #webassembly #wasm #tinygo #rust
OpenCV for Go & WebAssembly w/ Ron Evans
YouTube video by OpenCV
youtube.com
September 25, 2025 at 3:06 PM
Reposted by The Hybrid Group
I'm streaming tomorrow on OpenCV Live about computer vision using Go + WebAssembly + Machine Learning!

9 AM PST/6 PM CET

YT: www.youtube.com/live/gq2MXxv...
Twitch: twitch.tv/opencvofficial
Zoom: opencv.live

#golang #opencv #wasm #webassembly #rust #clang
OpenCV for Go & WebAssembly w/ Ron Evans
YouTube video by OpenCV
www.youtube.com
September 24, 2025 at 8:45 PM