- Enhanced coding capabilities, esp. front-end & tool-calling
- Context length extended to 256k tokens
- Improved integration with various agent scaffolds
I tried to fix a bug while running at 80-90% context size, and for an hour of back and forth it kept circling the same ideas no matter what I told it. Since the context had gotten too big, I started a new chat in "auto" mode (by accident) and it one-shotted the fix.
- Coding is okay on medium-sized code bases
- Tool calling seems great, but I haven't tested it enough to say for sure
- I miss the variety of models we had before
- The price is a huge win for #openai - people don't yet understand the impact
- The router is annoying, but it's something others will adopt
www.reddit.com/r/LocalLLaMA... #AI #LLM #homelabai #localaiagent
ollama.com/library/deep...
gist.github.com/awni/ec071fd...
youtu.be/NEi9oJbwZC4