The hype is real: a mysterious gpt2-chatbot model has appeared on the LLM Arena Leaderboard 👀. It seems to be at least on par with the top-performing models (closed and open).
To try it out: https://chat.lmsys.org/ -> then click on the Direct Chat tab and select gpt2-chatbot.
How Robust Is Your Model in Complex Code Generation Tasks? 🤔
We've launched the PECC benchmark to challenge chat models in code generation, drawing from the Advent of Code for programming tasks and Project Euler for math-heavy challenges. This new task presents problems in both detailed prose and concise "leet code" styles, evaluating a model's ability to understand and solve complex coding and math problems in chat-based interactions.
It seems that the Claude 3 models outperform ChatGPT:
Model / Avg. (pass@3)
◆ Claude 3 Haiku: 27.67
◆ GPT-3.5-Turbo: 23.75
◆ Mixtral-8x22B-Instruct-v0.1: 8.35
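For context on the "Avg. (pass@3)" column: pass@k is commonly estimated with the unbiased formula from the Codex paper and then averaged over all problems. A minimal sketch of that general estimator (PECC's exact evaluation harness may differ, and the function name here is illustrative):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n - c, k) / C(n, k).

    n: total samples generated for a problem
    c: number of samples that pass all checks
    k: the k in pass@k
    """
    if n - c < k:
        # Not enough failing samples to fill k draws: pass@k is certain
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 10 samples, 2 correct -> pass@3 ≈ 0.53
print(pass_at_k(n=10, c=2, k=3))
```

The benchmark score is then the mean of this value over the full problem set.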
Some tokens are more relevant than others, and some are mostly noise (just look up the history of 𝘚𝘰𝘭𝘪𝘥𝘎𝘰𝘭𝘥𝘔𝘢𝘨𝘪𝘬𝘢𝘳𝘱).
So this paper introduces 𝗦𝗲𝗹𝗲𝗰𝘁𝗶𝘃𝗲 𝗟𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗠𝗼𝗱𝗲𝗹𝗶𝗻𝗴, which is actually really simple: ➡️ A relevance metric scores each token, and during training only the top-k% tokens by this metric count in the loss calculation.
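Here is a minimal PyTorch sketch of what such a selective loss can look like, assuming the relevance metric is the excess loss of the training model over a reference model trained on clean data (as in the paper); the function name, `keep_ratio`, and tensor shapes are illustrative, not the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def selective_lm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Selective Language Modeling loss (sketch).

    logits:     (batch, seq_len, vocab) from the model being trained
    ref_logits: (batch, seq_len, vocab) from a reference model trained on clean data
    labels:     (batch, seq_len) next-token targets, already shifted to align with logits
    keep_ratio: fraction of tokens (top-k%) that contribute to the loss
    """
    vocab = logits.size(-1)
    # Per-token cross-entropy for both models, no reduction
    loss_model = F.cross_entropy(
        logits.reshape(-1, vocab), labels.reshape(-1), reduction="none"
    )
    with torch.no_grad():
        loss_ref = F.cross_entropy(
            ref_logits.reshape(-1, vocab), labels.reshape(-1), reduction="none"
        )
        # Relevance metric: excess loss of the training model over the reference model
        excess = loss_model.detach() - loss_ref
        # Keep only the top-k% most relevant tokens
        k = max(1, int(keep_ratio * excess.numel()))
        threshold = excess.topk(k).values.min()
        mask = (excess >= threshold).float()
    # Only the selected tokens contribute to the gradient
    return (loss_model * mask).sum() / mask.sum()
```

Whether the top-k% cut-off is applied per batch, per sequence, or over the whole corpus is a design choice; this sketch does it per batch for simplicity.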
The authors test this method by training models on the difficult MATH dataset (competition mathematics problems only).
➡️ Their technique seems like a new must-do in LLM training: training is much faster and reaches impressive performance!
𝐑𝐞𝐬𝐮𝐥𝐭𝐬:
◆ ⏱️ Training is 5x to 10x faster to reach equivalent performance compared to standard language modeling.
◆ 💪 Their 1B model achieves close to GPT-4 Chain-of-Thought performance on MATH!
◆ 🚀 Their 7B model matches the performance of the state-of-the-art DeepSeek model of the same size, while being trained on only 3% of the tokens.
𝐀𝐝𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐢𝐧𝐬𝐢𝐠𝐡𝐭𝐬 💡
◆ Datasets used for pre-training, even after pre-filtering, still contain a large proportion of noisy tokens 😖
◆ The authors show that reducing the loss on noisy tokens actually reduces accuracy (Figure 7). So Selective Language Modeling seems fundamental! ✅