---
license: cc-by-4.0
---

Web-Bench

English | 中文 (Chinese) README

📖 Overview

Web-Bench is a benchmark designed to evaluate the performance of LLMs in real Web development. It contains 50 projects, each consisting of 20 tasks with sequential dependencies (1,000 tasks in total). The tasks implement project features in sequence, simulating real-world human development workflows. In designing Web-Bench, we aimed to cover the foundational elements of Web development: Web Standards and Web Frameworks. The projects were designed by engineers with 5-10 years of experience, and given their scale and complexity, each presents a significant challenge: on average, a single project takes a senior engineer 4–8 hours to complete. With the provided benchmark agent (Web-Agent), the SOTA model (Claude 3.7 Sonnet) achieves only 25.1% Pass@1.

The distribution of the experimental results aligns well with the current code-generation capabilities of mainstream LLMs.

(Figure: Pass@1 results of mainstream LLMs on Web-Bench)

HumanEval and MBPP have approached saturation, and APPS and EvalPlus are approaching it. The SOTA for Web-Bench is 25.1%, which is lower (i.e., more challenging) than that of the SWE-bench Full and Verified sets.

(Figure: SOTA Pass@1 across benchmarks)

The dataset was presented in the paper Web-Bench: A LLM Code Benchmark Based on Web Standards and Frameworks.

🏅 Leaderboard

(Leaderboard figure: per-model Pass@1 results)

Dataset Structure

Each Web-Bench datum contains the following fields (a loading sketch follows the list):

- id: (str) Task id, init | task-n
- project: (str) Name of the project the task belongs to
- description: (str) Detailed description of the task
- date: (str) Task publish date, used to filter out models contaminated by training data
- level: (str) Task difficulty: easy | moderate | challenging
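
As a quick way to inspect these fields, the records can be loaded with the Hugging Face datasets library. The sketch below is illustrative only: the repository id and split name are assumptions, so substitute the values shown on this dataset page.

```python
# Minimal sketch: load Web-Bench with the `datasets` library and inspect one record.
# NOTE: the repo id and split name are placeholders (assumptions), not taken from
# this card -- replace them with the actual values.
from datasets import load_dataset

ds = load_dataset("ORG/Web-Bench", split="train")  # placeholder repo id, assumed split

example = ds[0]
print(example["id"])                 # "init" or "task-n"
print(example["project"])            # project the task belongs to
print(example["level"])              # easy | moderate | challenging
print(example["description"][:200])  # truncated task description
```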

📘 Usage

GitHub
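
Full usage instructions for the evaluation harness (Web-Agent) are in the linked GitHub repository. For a quick local look at how a project's tasks chain together, the sketch below walks one project's tasks in sequential order; it assumes the id pattern init | task-n described above, and the repository id and split name are again placeholders.

```python
# Minimal sketch: iterate one project's tasks in their sequential (dependency) order.
# Assumptions: ids follow "init" | "task-n"; repo id and split name are placeholders.
from datasets import load_dataset

def task_order(task_id: str) -> int:
    # "init" comes first, then task-1, task-2, ...
    return 0 if task_id == "init" else int(task_id.rsplit("-", 1)[1])

ds = load_dataset("ORG/Web-Bench", split="train")  # placeholder repo id, assumed split

project_name = ds[0]["project"]  # pick any project to inspect
tasks = sorted(
    (row for row in ds if row["project"] == project_name),
    key=lambda row: task_order(row["id"]),
)

for row in tasks:
    print(row["id"], "-", row["description"][:80])
```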