
The top 1, 2, 3 resolve to 50%-30%-20%
Resolution will be based on the LMSYS Arena Leaderboard. If it ceases to exist, I will choose another benchmark
For example, at 2023 EOY this would resolve Mistral 50%, Yi 30%, Meta 20%
@mods since the creator deleted their account, I think it's best to ask y'all. As of now, the top 3 are open-weights models (discounting Qwen-Max because it's closed-source, unlike their other releases):
1. kimi-k2-thinking-turbo, Moonshot AI
2. glm-4.6, Z.ai
3. deepseek-v3.2-exp, DeepSeek
As I understand the resolution criteria, the Other category (since Moonshot and Z.ai are not listed) would resolve to 80% and DeepSeek would resolve to 20%?
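The math above can be sketched as follows. This is a minimal illustration, assuming the 50/30/20 rank weights from the description and that labs without their own listed answer pool into an Other bucket; the lab names and the `resolve` helper are hypothetical, not part of the market's actual mechanics.

```python
# Rank weights for the top 3 labs, per the market description.
WEIGHTS = [0.50, 0.30, 0.20]

# Labs that have their own answer option (assumption for illustration).
LISTED_LABS = {"DeepSeek"}

def resolve(top3_labs):
    """Map each rank's weight to its lab's answer, pooling
    unlisted labs into 'Other'."""
    payout = {}
    for lab, weight in zip(top3_labs, WEIGHTS):
        key = lab if lab in LISTED_LABS else "Other"
        payout[key] = payout.get(key, 0.0) + weight
    return payout

# Current top 3: Moonshot AI and Z.ai are unlisted, so their
# 50% + 30% pool into Other, leaving DeepSeek with 20%.
print(resolve(["Moonshot AI", "Z.ai", "DeepSeek"]))
```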
@Stralor ah, maybe I jumped the gun; this is about "how might this resolve?", and yeah, I think your plan makes sense. I'll reopen it for now
@Stralor I do wanna challenge: what does "open weights" mean? Because that isn't the same as "open source", and I wonder if Qwen really qualifies
@Stralor generally when people say open source they actually mean open weights. E.g. Meta used to call Llama releases open source, but they don't reveal all of the training steps or data alongside the model weights themselves
I'm telling y'all, don't underestimate DeepSeek. They already figured out how to do MoE
Also, Mistral is probably not going to open-weight their better models anymore, so Mixtral 8x7B is the best we will get
It's not certain that Llama 3 will be open source either