Maintain and develop: MTEB, ScandEval, tomsup, DaCy, etc.
#NLProc
We share the code and update the leaderboard with each new release:
📈 Leaderboard: euroeval.com/leaderboards...
🔗 Website: euroeval.com
📄 Paper: arxiv.org/abs/2406.13469
👩💻 GitHub: github.com/EuroEval/Eur...
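If you want to run the benchmark yourself, here is a minimal sketch using the euroeval package. The `Benchmarker` interface is an assumption based on the project README, and the model ID is only a placeholder, so check the docs for the exact API:

```python
# Minimal sketch: benchmark a model with EuroEval.
# Assumes `pip install euroeval[all]`; the Benchmarker interface below is
# taken from the project README as an assumption -- verify against the docs.
from euroeval import Benchmarker

benchmarker = Benchmarker()

# The model ID is only an example; any Hugging Face model ID should work.
benchmarker(model="mistralai/Mistral-7B-v0.1")
```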
euroeval.com/extras/radia...
📑 Paper: arxiv.org/abs/2502.135...
📈 Leaderboard: huggingface.co/spaces/mteb/...
👩💻 GitHub: github.com/embeddings-b...
We find that smaller multilingual models (~500M parameters) outperform notably larger 7B models, likely because the larger models saw only limited multilingual data during pre-training.
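If you want to poke at this comparison yourself, here is a minimal sketch using the mteb package; the task and the model (a ~560M-parameter multilingual encoder) are just example choices:

```python
# Minimal sketch: evaluate an embedding model on a single MTEB task.
# Assumes `pip install mteb`; the task and model are examples only.
import mteb

# ~560M-parameter multilingual embedding model (example choice)
model = mteb.get_model("intfloat/multilingual-e5-large")

# One small classification task to keep the run quick
tasks = mteb.get_tasks(tasks=["Banking77Classification"])

evaluation = mteb.MTEB(tasks=tasks)
results = evaluation.run(model, output_folder="results")
print(results)
```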