---
title: OpenMDAO Optimization Benchmarks
tags:
- optimization
- engineering
- openmdao
- benchmarking
- scipy
license: apache-2.0
task_categories:
- tabular-regression
- tabular-classification
size_categories:
- n<1K
---
# OpenMDAO Optimization Benchmarks
This dataset contains comprehensive benchmarking results from OpenMDAO optimization runs on standard test problems from the optimization literature.
## Dataset Description
- **Total Samples:** 55
- **Problems:** 5 literature-validated test functions (Rosenbrock, Beale, Booth, Rastrigin, Ackley)
- **Optimizers:** 3 algorithms (SLSQP, COBYLA, L-BFGS-B)
- **Runs:** 3-5 per optimizer-problem combination
- **Created:** 2025-08-24
## Key Results
- **Best Performer:** SLSQP (63% success rate)
- **Problem Difficulty** (ranked by per-problem success rate): Rosenbrock (70%) → Booth (67%) → Beale (36%) → Ackley/Rastrigin (0%)
- **Comprehensive Metrics:** Accuracy, efficiency, and robustness scores included
## Problems Included

**Rosenbrock Function** - Classic banana function (moderate difficulty)
- Global optimum: [1.0, 1.0], minimum value: 0.0
- Reference: Rosenbrock, H.H. (1960)

**Beale Function** - Multimodal valley function (moderate difficulty)
- Global optimum: [3.0, 0.5], minimum value: 0.0
- Reference: Beale, E.M.L. (1958)

**Booth Function** - Simple quadratic bowl (easy)
- Global optimum: [1.0, 3.0], minimum value: 0.0
- Reference: Standard test function

**Rastrigin Function** - Highly multimodal (hard)
- Global optimum: [0.0, 0.0], minimum value: 0.0
- Reference: Rastrigin, L.A. (1974)

**Ackley Function** - Multimodal with many local minima (hard)
- Global optimum: [0.0, 0.0], minimum value: 0.0
- Reference: Ackley, D.H. (1987)
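The dataset does not bundle the function implementations, but all five have standard closed forms in the literature. A minimal NumPy sketch of the two-variable versions, consistent with the optima listed above:

```python
import numpy as np

def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2          # min 0 at (1, 1)

def beale(x, y):
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)               # min 0 at (3, 0.5)

def booth(x, y):
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2     # min 0 at (1, 3)

def rastrigin(x, y):
    # General form is 10*n + sum(x_i^2 - 10*cos(2*pi*x_i)); here n = 2.
    return (20 + (x ** 2 - 10 * np.cos(2 * np.pi * x))
               + (y ** 2 - 10 * np.cos(2 * np.pi * y)))    # min 0 at (0, 0)

def ackley(x, y):
    return (-20 * np.exp(-0.2 * np.sqrt(0.5 * (x ** 2 + y ** 2)))
            - np.exp(0.5 * (np.cos(2 * np.pi * x) + np.cos(2 * np.pi * y)))
            + np.e + 20)                                   # min 0 at (0, 0)
```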
## Optimizers Benchmarked

**SLSQP**: Sequential Least Squares Programming (gradient-based)
- Success rate: 63%
- Best for: Smooth, well-behaved functions

**COBYLA**: Constrained Optimization BY Linear Approximations (derivative-free)
- Success rate: 0% (on these test problems)
- Better for: Constraint-heavy problems

**L-BFGS-B**: Limited-memory BFGS with bound constraints (gradient-based)
- Success rate: 41%
- Good for: Large-scale optimization
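The exact benchmark harness that produced this dataset is not included with this card, but a single run can be reproduced in spirit with OpenMDAO's standard `ScipyOptimizeDriver` and `ExecComp`. A minimal sketch (Rosenbrock under SLSQP; starting point and bounds are illustrative choices, not the dataset's settings):

```python
import openmdao.api as om

# Minimize the Rosenbrock function with SLSQP via OpenMDAO's SciPy driver.
prob = om.Problem()
prob.model.add_subsystem(
    'rosen',
    om.ExecComp('f = (1 - x)**2 + 100 * (y - x**2)**2'),
    promotes=['*'],
)
prob.model.add_design_var('x', lower=-5.0, upper=5.0)
prob.model.add_design_var('y', lower=-5.0, upper=5.0)
prob.model.add_objective('f')

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'   # or 'COBYLA'
prob.driver.options['tol'] = 1e-8

prob.setup()
prob.set_val('x', -1.0)
prob.set_val('y', 1.5)
prob.run_driver()

print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))  # ~[1.], [1.], [0.]
```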
## Dataset Structure
Each record contains:

### Basic Information
- `run_id`: Unique identifier
- `optimizer`: Algorithm used
- `problem`: Test function name
- `dimension`: Problem dimensionality
### Results
- `optimal_value`: Final objective value
- `optimal_point`: Final design variables
- `error_from_known`: Distance from the known global optimum
- `success`: Boolean convergence flag
### Performance Metrics
- `iterations`: Number of optimization iterations
- `function_evaluations`: Number of objective function calls
- `time_elapsed`: Wall-clock time (seconds)
- `convergence_rate`: Rate of convergence
### Evaluation Scores
- `accuracy_score`: 1 / (1 + error_from_known)
- `efficiency_score`: 1 / (1 + iterations / 50)
- `robustness_score`: Convergence stability
- `overall_score`: Weighted combination of the above
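The first two scores are literal translations of the formulas above; a small sketch (the `robustness_score` definition and the `overall_score` weights are not spelled out in this card, so they are omitted here):

```python
def accuracy_score(error_from_known: float) -> float:
    # 1 / (1 + error): 1.0 at the known optimum, decaying toward 0 with distance
    return 1.0 / (1.0 + error_from_known)

def efficiency_score(iterations: int) -> float:
    # 1 / (1 + iterations / 50): 1.0 for instant convergence, 0.5 at 50 iterations
    return 1.0 / (1.0 + iterations / 50.0)
```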
### Metadata
- `convergence_history`: Last 10 objective values
- `problem_reference`: Literature citation
- `timestamp`: When the run was executed
## Usage Examples
```python
import json

import pandas as pd

# Load the dataset
with open('data.json', 'r') as f:
    data = json.load(f)
df = pd.DataFrame(data)

# Success rates by optimizer
success_by_optimizer = df.groupby('optimizer')['success'].mean()
print("Success rates:")
print(success_by_optimizer)

# Ten best-performing runs by overall score
best_runs = df.nlargest(10, 'overall_score')
print("Top 10 runs:")
print(best_runs[['optimizer', 'problem', 'overall_score']])

# Problem difficulty ranking (higher success rate = easier)
difficulty = df.groupby('problem')['success'].mean().sort_values(ascending=False)
print("Problem difficulty ranking:")
print(difficulty)
```
## Research Applications
This dataset enables several research directions:
- **Algorithm Selection**: Predict the best optimizer for given problem characteristics (a minimal sketch follows this list)
- **Performance Modeling**: Build models to predict optimization outcomes
- **Hyperparameter Tuning**: Optimize algorithm parameters
- **Problem Classification**: Categorize problems by difficulty
- **Convergence Analysis**: Study optimization trajectories
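A first pass at the algorithm-selection framing treats the task as tabular classification on the categorical fields described earlier. This is an illustrative sketch, not part of the dataset; it assumes the `df` built in the usage example:

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Predict whether a run converges from the optimizer/problem pairing alone.
X = pd.get_dummies(df[['optimizer', 'problem']])  # one-hot encode categoricals
y = df['success'].astype(int)

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```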
## Quality Assurance
- ✅ Literature-validated test problems
- ✅ Multiple runs for statistical significance
- ✅ Comprehensive evaluation metrics
- ✅ Real convergence data (not synthetic)
- ✅ Proper error analysis and success criteria
## Citation
If you use this dataset, please cite:
```bibtex
@dataset{openmdao_benchmarks_2025,
  author = {OpenMDAO Development Team},
  title = {OpenMDAO Optimization Benchmarks},
  year = {2025},
  url = {https://huggingface.co/datasets/englund/openmdao-benchmarks},
  note = {Comprehensive benchmarking of optimization algorithms on standard test functions}
}
```
## License
Apache 2.0 - Free for research and commercial use.
## Contact
For questions or contributions, please open an issue on the dataset repository.
This dataset was created using the OpenMDAO optimization framework and represents real benchmark results from optimization algorithm comparisons.