euroeval.com/extras/radia...
We find that smaller multilingual models (~500M parameters) outperform notably larger 7B models, likely due to the larger models' limited multilingual pre-training.
This work implements >500 evaluation tasks across >1000 languages and covers a wide range of use cases and domains 🩺👩‍💻⚖️