- FineWeb2: One Pipeline to Scale Them All -- Adapting Pre-Training Data Processing to Every Language
  Paper • 2506.20920 • Published • 69
- SmolVLM: Redefining small and efficient multimodal models
  Paper • 2504.05299 • Published • 197
- YourBench: Easy Custom Evaluation Sets for Everyone
  Paper • 2504.01833 • Published • 22
- SmolLM2: When Smol Goes Big -- Data-Centric Training of a Small Language Model
  Paper • 2502.02737 • Published • 242
Collections including paper arxiv:2305.06161
- Design2Code: How Far Are We From Automating Front-End Engineering?
  Paper • 2403.03163 • Published • 99
- Wukong: Towards a Scaling Law for Large-Scale Recommendation
  Paper • 2403.02545 • Published • 17
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 31
- Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models
  Paper • 2308.10462 • Published • 2
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 31
- WizardCoder: Empowering Code Large Language Models with Evol-Instruct
  Paper • 2306.08568 • Published • 28
- SantaCoder: don't reach for the stars!
  Paper • 2301.03988 • Published • 7
- DeepSeek-Coder: When the Large Language Model Meets Programming -- The Rise of Code Intelligence
  Paper • 2401.14196 • Published • 66
- FinGPT: Large Generative Models for a Small Language
  Paper • 2311.05640 • Published • 32
- LCM-LoRA: A Universal Stable-Diffusion Acceleration Module
  Paper • 2311.05556 • Published • 87
- Distributed Deep Learning in Open Collaborations
  Paper • 2106.10207 • Published • 2
- Datasets: A Community Library for Natural Language Processing
  Paper • 2109.02846 • Published • 14
- Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning
  Paper • 2311.11077 • Published • 29
- Tensor Product Attention Is All You Need
  Paper • 2501.06425 • Published • 89
- LoRA: Low-Rank Adaptation of Large Language Models
  Paper • 2106.09685 • Published • 46
- ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
  Paper • 2403.03853 • Published • 66
- OLMo: Accelerating the Science of Language Models
  Paper • 2402.00838 • Published • 84
- Gemini 1.5: Unlocking multimodal understanding across millions of tokens of context
  Paper • 2403.05530 • Published • 66
- StarCoder: may the source be with you!
  Paper • 2305.06161 • Published • 31
- SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling
  Paper • 2312.15166 • Published • 59
- Language Agent Tree Search Unifies Reasoning Acting and Planning in Language Models
  Paper • 2310.04406 • Published • 10
- Chain-of-Thought Reasoning Without Prompting
  Paper • 2402.10200 • Published • 110
- ICDPO: Effectively Borrowing Alignment Capability of Others via In-context Direct Preference Optimization
  Paper • 2402.09320 • Published • 6
- Self-Discover: Large Language Models Self-Compose Reasoning Structures
  Paper • 2402.03620 • Published • 118
- Attention Is All You Need
  Paper • 1706.03762 • Published • 81
- BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
  Paper • 1810.04805 • Published • 20
- RoBERTa: A Robustly Optimized BERT Pretraining Approach
  Paper • 1907.11692 • Published • 9
- DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter
  Paper • 1910.01108 • Published • 17
- Creative Robot Tool Use with Large Language Models
  Paper • 2310.13065 • Published • 9
- CodeCoT and Beyond: Learning to Program and Test like a Developer
  Paper • 2308.08784 • Published • 5
- Lemur: Harmonizing Natural Language and Code for Language Agents
  Paper • 2310.06830 • Published • 34
- CodePlan: Repository-level Coding using LLMs and Planning
  Paper • 2309.12499 • Published • 78