#homelabai
Great comparison of local LLMs and their performance on consumer-grade cards (24 GB VRAM limit):
www.reddit.com/r/LocalLLaMA... #AI #LLM #homelabai #localaiagent
From the LocalLLaMA community on Reddit: I benchmarked (almost) every model that can fit in 24GB VRAM (Qwens, R1 distils, Mistrals, even Llama 70b gguf)
January 24, 2025 at 2:58 PM