Instead of embeddings/density clusters that miss nuance, duktr uses LLM reasoning to deduce subtle concepts and connect the dots. Minimal dependencies. Supports OpenAI/Gemini/HuggingFace or any LLM of your choice.
No problem. Exactly. I think that example is misleading and shouldn’t be used against R2 to undermine its credibility for predictive performance evaluation.
October 28, 2023 at 5:24 PM
Goodness of prediction and goodness of detection of the detectable signal are two different things. You can do a good job of finding the true underlying function and yet fail miserably at having a good prediction model.
October 28, 2023 at 3:03 PM
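To make that distinction concrete, here is a minimal Python sketch (not from the thread; the data-generating model and all numbers are invented for illustration): ordinary least squares recovers the true underlying line almost exactly, yet R² stays low and prediction error stays large because the irreducible noise dominates.

```python
# Minimal sketch, assuming a linear signal buried in large irreducible noise.
# All numbers are illustrative, not taken from the discussion above.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.uniform(0, 10, n)
true_slope, true_intercept = 2.0, 1.0
noise_sd = 20.0                                   # noise dwarfs the signal
y = true_intercept + true_slope * x + rng.normal(0, noise_sd, n)

# Least squares recovers the true underlying function well...
slope, intercept = np.polyfit(x, y, 1)
print(f"estimated slope={slope:.2f}, intercept={intercept:.2f}")  # near 2 and 1

# ...but the predictions are still poor, because the noise is irreducible.
y_hat = intercept + slope * x
mse = np.mean((y - y_hat) ** 2)
r2 = 1 - mse / np.var(y)
print(f"MSE={mse:.1f}, R^2={r2:.2f}")  # MSE near noise_sd**2 = 400, R^2 near 0.08
```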
I disagree; that shows why R² is better than MSE. The second example has a significantly weaker signal, and R² shows that while MSE doesn't. A 1 cm error in predicting a human's height is very good, while it's rubbish for predicting an ant's height (same MSE, different context). R² reflects the context.
October 28, 2023 at 2:58 PM
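Here is a minimal Python sketch of that "same MSE, different context" point (all data and numbers are invented for illustration): the same roughly 1 cm error yields an R² near 1 when the outcome varies on a human-height scale, and an R² below 0 when it varies on an ant-height scale.

```python
# Minimal sketch, assuming normally distributed "heights" and a fixed ~1 cm
# prediction error in both settings. Numbers are illustrative only.
import numpy as np

rng = np.random.default_rng(1)

def mse_and_r2(y, y_hat):
    mse = np.mean((y - y_hat) ** 2)
    r2 = 1 - mse / np.var(y)        # R^2 relative to always predicting the mean
    return mse, r2

# Human heights in cm: spread of ~10 cm, predictions off by ~1 cm.
humans = rng.normal(170, 10, 5000)
human_pred = humans + rng.normal(0, 1, 5000)

# Ant "heights" in cm: spread of ~0.2 cm, predictions off by the same ~1 cm.
ants = rng.normal(0.5, 0.2, 5000)
ant_pred = ants + rng.normal(0, 1, 5000)

print(mse_and_r2(humans, human_pred))  # MSE ~ 1, R^2 ~ 0.99
print(mse_and_r2(ants, ant_pred))      # MSE ~ 1, R^2 far below 0 (worse than the mean)
```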