LLM Talk
@learn-llm.bsky.social
Building local LLMs. Follow for tips and general talk about AI and LLMs. I'll mostly share what I'm working on here: my projects and findings.

#Python #LLM #AI
"This is what fine-tuning does to smaller models. They make the tiny, weaker models solve specific problems better than the giants, which claim to do everything under the sun."

#LLM #devs #GPT

towardsdatascience.com/i-fine-tuned...
I Fine-Tuned the Tiny Llama 3.2 1B to Replace GPT-4o
Is the fine-tuning effort worth more than few-shot prompting?
towardsdatascience.com
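Not the article's exact recipe, just a minimal sketch of the idea: freeze the small model and train LoRA adapters on your task data. Assumes transformers, peft, and datasets are installed; the model name, dataset file, and hyperparameters below are placeholders.

```python
# Minimal LoRA fine-tuning sketch; everything below is illustrative, not the
# article's setup.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import load_dataset

model_name = "meta-llama/Llama-3.2-1B"      # gated; any small causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Train small adapter matrices instead of all 1B base weights.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Hypothetical JSONL file with a "text" field of task-specific examples.
data = load_dataset("json", data_files="my_task.jsonl")["train"]
data = data.map(lambda row: tokenizer(row["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama32-1b-task",
                           per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()
```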
November 24, 2024 at 5:47 PM
A few of the best uses of RAG:

✅ Talk to your API docs (sketch below)
✅ Convert your docs/PDFs for analysis
✅ Let your customers find info through a chatbot
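A minimal sketch of the first use case, assuming chromadb and the OpenAI Python client are installed; the collection name, doc chunks, and model name are made up for illustration.

```python
# Toy "talk to your API docs" retrieval: embed doc chunks, fetch the closest
# ones for a question, and let the LLM answer from that context only.
import chromadb
from openai import OpenAI

client = OpenAI()
docs = chromadb.Client().create_collection("api_docs")

# Index a couple of doc chunks; Chroma embeds them with its default embedder.
docs.add(
    ids=["auth-1", "rate-1"],
    documents=["Authenticate by sending an X-API-Key header.",
               "The API allows 100 requests per minute per key."],
)

question = "How do I authenticate?"
hits = docs.query(query_texts=[question], n_results=2)
context = "\n".join(hits["documents"][0])

answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user",
               "content": f"Answer using only this context:\n{context}\n\nQ: {question}"}],
)
print(answer.choices[0].message.content)
```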
November 24, 2024 at 5:23 PM
Great read on leveraging RAGs in real-world scenarios where the data being retrieved likely spans thousands of entries.

Source: Frank Wittkampf on Medium; give him a follow.
towardsdatascience.com/spoiler-aler...

#Devs #AI #LLM #RAG
November 18, 2024 at 7:28 AM
RAGs in AGI circles are like "Hello World" programs. Easy to start with, but it gets very tricky when you start building a real-world, live app for real users.

From the unpredictable nature of your users' questions to finding an ideal way to save your context in vectors, it gets more interesting over time.
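For the "save your context in vectors" part, the usual starting point is chunking before embedding. A toy sketch; the chunk size, overlap, and file name are arbitrary examples, not recommendations.

```python
# Split a long document into overlapping chunks so a single user question can
# match a focused piece of text instead of a whole file.
def chunk_text(text: str, size: int = 500, overlap: int = 100) -> list[str]:
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

document = open("user_manual.txt").read()   # hypothetical source document
chunks = chunk_text(document)
# Each chunk then gets embedded and stored in the vector DB with metadata
# (source file, position) so retrieved context can be traced back.
```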
November 18, 2024 at 5:23 AM
Tech stack for building next-gen AGI apps: RAGs, chatbots, code-generation tools, etc.

#LLM #AGI #AI #OpenAI #Python
November 16, 2024 at 1:09 PM
One of the best ways to save time when generating text with an LLM is to reduce the max-tokens setting in your request. That cuts generation short; you can then parse out the incomplete trailing part and render only the completed portion.
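A rough sketch of that pattern with the OpenAI client; the 64-token cap and the model name are just examples, and any client that exposes a max-tokens knob plus a finish reason works the same way.

```python
# Cap generation with max_tokens, then drop the unfinished trailing sentence
# before rendering.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize what a vector database does."}],
    max_tokens=64,   # hard cap: the model stops early instead of running long
)

text = resp.choices[0].message.content
if resp.choices[0].finish_reason == "length":
    # Output was cut mid-sentence; keep only the text up to the last full stop.
    text = text[:text.rfind(".") + 1] or text
print(text)
```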
November 15, 2024 at 6:01 PM
FAISS vs Chroma 🧵

If you've built a RAG pipeline for retrieval or are currently building one, you've likely used one of those two to get started.

Both are great but serve different needs. Here's a quick breakdown to help you decide: 👇 1/5

#LLM #RAG #Llama #OpenAI
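To make the comparison concrete, here's the same toy similarity search in both; assumes faiss-cpu and chromadb are installed and uses random vectors as stand-in embeddings.

```python
# FAISS: you manage raw vectors and IDs yourself, fully in memory.
# Chroma: a document store that keeps text, embeddings, and metadata together.
import numpy as np
import faiss
import chromadb

vectors = np.random.rand(1000, 384).astype("float32")   # pretend embeddings
query = np.random.rand(1, 384).astype("float32")

index = faiss.IndexFlatL2(384)
index.add(vectors)
distances, ids = index.search(query, 3)                  # top-3 nearest vectors

collection = chromadb.Client().create_collection("faiss_vs_chroma_demo")
collection.add(
    ids=[str(i) for i in range(len(vectors))],
    embeddings=vectors.tolist(),
    documents=[f"chunk {i}" for i in range(len(vectors))],
)
hits = collection.query(query_embeddings=query.tolist(), n_results=3)
```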
November 14, 2024 at 7:29 PM
First post!!!

Finished my newest RAG over a 100K+ dataset, and it was an experience. Some challenges:

1. Your typical vector stores like Chroma and FAISS come with query limitations,
2. It's tricky when you want to extract specific things, like numerical criteria, out of the "query" (see the sketch below) 1/n

#Python #RAG #LLM #Llama
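One possible workaround for point 2: pull the numbers out of the query yourself and pass them to the store as a metadata filter instead of relying on similarity alone. A rough sketch with chromadb; the regex, collection, and prices are made up.

```python
# Extract a numeric criterion ("under 500") from the query and turn it into a
# Chroma metadata filter so results actually respect it.
import re
import chromadb

collection = chromadb.Client().create_collection("products")
collection.add(
    ids=["p1", "p2"],
    documents=["Lightweight laptop, great battery life.", "Gaming laptop, RTX GPU."],
    metadatas=[{"price": 450}, {"price": 1200}],
)

query = "a good laptop under 500 dollars"
match = re.search(r"under\s+\$?(\d+)", query)            # crude numeric extraction
where = {"price": {"$lt": int(match.group(1))}} if match else None

hits = collection.query(query_texts=[query], n_results=3, where=where)
print(hits["documents"][0])
```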
November 14, 2024 at 7:16 PM