Working on Lagrange (something like Devin, but it actually works)
They were able to get Sonnet-level performance with less data than Llama 3.3 70B.
Does that mean scaling isn't over (if we can just be more efficient)? Also, does that mean we can train an LLM fully on properly licensed content?
Become a summer neuroAI intern at CSHL!
www.schooljobs.com/careers/cshl...
Lagrange is an AI software engineer - like Devin, but it works amazingly.
We're opening it up for alpha testing. If you code, make projects, etc., DM me or ask me to DM you.
Some projects made by Lagrange without us writing a single line of code:
And the visuals of 2001: A Space Odyssey, made in 1968, are inexplicably incredible.
If we really want to make AI "understand" things, giving it an internal framework to think about things should be the most obvious step from the very start. (1/n)