---
title: OpenMDAO Optimization Benchmarks
tags:
  - optimization
  - engineering
  - openmdao
  - benchmarking
  - scipy
license: apache-2.0
task_categories:
  - tabular-regression
  - tabular-classification
size_categories:
  - n<1K
---

# OpenMDAO Optimization Benchmarks

This dataset contains comprehensive benchmarking results from OpenMDAO optimization runs on standard test problems from the optimization literature.

## Dataset Description

- **Total samples:** 55
- **Problems:** 5 literature-validated test functions (Rosenbrock, Beale, Booth, Rastrigin, Ackley)
- **Optimizers:** 3 algorithms (SLSQP, COBYLA, L-BFGS-B)
- **Runs:** 3-5 per optimizer-problem combination
- **Created:** 2025-08-24

## Key Results

- **Best performer:** SLSQP (63% success rate)
- **Problem difficulty** (ranked by success rate, easiest to hardest): Rosenbrock (70%) → Booth (67%) → Beale (36%) → Ackley/Rastrigin (0%)
- **Comprehensive metrics:** Accuracy, efficiency, and robustness scores are included for every run

## Problems Included

1. **Rosenbrock Function** - Classic banana function (moderate difficulty)
   - Global optimum: [1.0, 1.0], minimum value: 0.0
   - Reference: Rosenbrock, H.H. (1960)
2. **Beale Function** - Multimodal valley function (moderate difficulty)
   - Global optimum: [3.0, 0.5], minimum value: 0.0
   - Reference: Beale, E.M.L. (1958)
3. **Booth Function** - Simple quadratic bowl (easy)
   - Global optimum: [1.0, 3.0], minimum value: 0.0
   - Reference: Standard test function
4. **Rastrigin Function** - Highly multimodal (hard)
   - Global optimum: [0.0, 0.0], minimum value: 0.0
   - Reference: Rastrigin, L.A. (1974)
5. **Ackley Function** - Multimodal with many local minima (hard)
   - Global optimum: [0.0, 0.0], minimum value: 0.0
   - Reference: Ackley, D.H. (1987)
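
For reference, the sketch below collects standard NumPy definitions of these five test functions. They follow the usual textbook forms and are provided for convenience; they are not extracted from the dataset's own generation code.

```python
import numpy as np

def rosenbrock(x, y):
    """Rosenbrock banana function; global minimum 0.0 at (1, 1)."""
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def beale(x, y):
    """Beale function; global minimum 0.0 at (3, 0.5)."""
    return ((1.5 - x + x * y) ** 2
            + (2.25 - x + x * y ** 2) ** 2
            + (2.625 - x + x * y ** 3) ** 2)

def booth(x, y):
    """Booth function; global minimum 0.0 at (1, 3)."""
    return (x + 2 * y - 7) ** 2 + (2 * x + y - 5) ** 2

def rastrigin(x):
    """Rastrigin function for an n-dimensional point; global minimum 0.0 at the origin."""
    x = np.asarray(x)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def ackley(x):
    """Ackley function for an n-dimensional point; global minimum 0.0 at the origin."""
    x = np.asarray(x)
    n = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n)
            + 20 + np.e)
```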

## Optimizers Benchmarked

- **SLSQP**: Sequential Least Squares Programming (gradient-based)
  - Success rate: 63%
  - Best for: smooth, well-behaved functions
- **COBYLA**: Constrained Optimization BY Linear Approximations (derivative-free)
  - Success rate: 0% (on these test problems)
  - Better for: constraint-heavy problems
- **L-BFGS-B**: Limited-memory BFGS with bounds (gradient-based)
  - Success rate: 41%
  - Good for: large-scale optimization
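
All three optimizers are SciPy methods, presumably driven through OpenMDAO's ScipyOptimizeDriver. A minimal sketch of a single Rosenbrock/SLSQP run, assuming OpenMDAO 3.x, is shown below; the bounds, starting point, and tolerances are illustrative, not the settings used to generate the dataset.

```python
import openmdao.api as om

prob = om.Problem()

# Rosenbrock objective as a simple ExecComp (illustrative model, not the benchmark code).
prob.model.add_subsystem('rosen',
                         om.ExecComp('f = (1 - x)**2 + 100 * (y - x**2)**2'),
                         promotes=['*'])
prob.model.add_design_var('x', lower=-5.0, upper=5.0)
prob.model.add_design_var('y', lower=-5.0, upper=5.0)
prob.model.add_objective('f')

prob.driver = om.ScipyOptimizeDriver()
prob.driver.options['optimizer'] = 'SLSQP'   # or 'COBYLA', 'L-BFGS-B'
prob.driver.options['maxiter'] = 200
prob.driver.options['tol'] = 1e-8

prob.setup()
prob.set_val('x', -1.5)   # illustrative starting point
prob.set_val('y', 2.0)
prob.run_driver()

print(prob.get_val('x'), prob.get_val('y'), prob.get_val('f'))
```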

## Dataset Structure

Each record contains:

### Basic Information

- `run_id`: Unique identifier
- `optimizer`: Algorithm used
- `problem`: Test function name
- `dimension`: Problem dimensionality

### Results

- `optimal_value`: Final objective value
- `optimal_point`: Final design variables
- `error_from_known`: Distance from the known global optimum
- `success`: Boolean convergence flag

### Performance Metrics

- `iterations`: Number of optimization iterations
- `function_evaluations`: Number of objective function calls
- `time_elapsed`: Wall-clock time (seconds)
- `convergence_rate`: Rate of convergence

### Evaluation Scores

- `accuracy_score`: `1 / (1 + error_from_known)`
- `efficiency_score`: `1 / (1 + iterations / 50)`
- `robustness_score`: Convergence stability
- `overall_score`: Weighted combination of the above
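
The first two scores are closed-form in the raw fields, so they can be re-derived as a sanity check. A small sketch, assuming the `df` DataFrame built in the usage example below:

```python
# Re-derive the two closed-form scores and compare to the stored values.
# Assumes `df` is the pandas DataFrame built in the usage example below.
accuracy_recomputed = 1.0 / (1.0 + df['error_from_known'])
efficiency_recomputed = 1.0 / (1.0 + df['iterations'] / 50)

print('max accuracy_score deviation:  ', (accuracy_recomputed - df['accuracy_score']).abs().max())
print('max efficiency_score deviation:', (efficiency_recomputed - df['efficiency_score']).abs().max())
```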

### Metadata

- `convergence_history`: Last 10 objective values
- `problem_reference`: Literature citation
- `timestamp`: When the run was executed
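
Putting the field lists together, a quick schema check can confirm that each record carries the documented keys. The snippet assumes `data.json` sits at the repository root, as in the usage example below.

```python
import json

# Documented fields from the lists above.
expected_fields = {
    'run_id', 'optimizer', 'problem', 'dimension',
    'optimal_value', 'optimal_point', 'error_from_known', 'success',
    'iterations', 'function_evaluations', 'time_elapsed', 'convergence_rate',
    'accuracy_score', 'efficiency_score', 'robustness_score', 'overall_score',
    'convergence_history', 'problem_reference', 'timestamp',
}

# 'data.json' at the repository root is an assumption (see the usage example below).
with open('data.json', 'r') as f:
    records = json.load(f)

missing = expected_fields - set(records[0])
print('Missing fields in the first record:', missing or 'none')
```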

## Usage Examples

```python
import json
import pandas as pd

# Load the dataset
with open('data.json', 'r') as f:
    data = json.load(f)

df = pd.DataFrame(data)

# Analyze success rates by optimizer
success_by_optimizer = df.groupby('optimizer')['success'].mean()
print("Success rates:", success_by_optimizer)

# Find best performing runs
best_runs = df.nlargest(10, 'overall_score')
print("Top 10 runs:")
print(best_runs[['optimizer', 'problem', 'overall_score']])

# Problem difficulty analysis
difficulty = df.groupby('problem')['success'].mean().sort_values(ascending=False)
print("Problem difficulty ranking:", difficulty)
```
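
A natural extension of the example above is a success-rate breakdown per optimizer-problem pair, for instance:

```python
# Success rate per optimizer-problem pair; the boolean flag is cast to float
# so the pivot_table mean is computed as a rate.
pivot = (df.assign(success_rate=df['success'].astype(float))
           .pivot_table(index='problem', columns='optimizer', values='success_rate'))
print(pivot)
```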

## Research Applications

This dataset enables several research directions:

1. **Algorithm Selection**: Predict the best optimizer for given problem characteristics (a minimal sketch follows this list)
2. **Performance Modeling**: Build models to predict optimization outcomes
3. **Hyperparameter Tuning**: Optimize algorithm parameters
4. **Problem Classification**: Categorize problems by difficulty
5. **Convergence Analysis**: Study optimization trajectories
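
As one hedged illustration of the algorithm-selection direction, the sketch below fits a simple scikit-learn classifier that predicts run success from the optimizer and problem labels; the feature and model choices are assumptions, not part of the dataset.

```python
# Toy algorithm-selection model: predict whether a run converges from the
# optimizer/problem combination. Assumes `df` from the usage example above.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X = pd.get_dummies(df[['optimizer', 'problem']])
y = df['success'].astype(int)

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=5)
print("Mean CV accuracy:", scores.mean())
```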

## Quality Assurance

- ✅ Literature-validated test problems
- ✅ Multiple runs for statistical significance
- ✅ Comprehensive evaluation metrics
- ✅ Real convergence data (not synthetic)
- ✅ Proper error analysis and success criteria

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{openmdao_benchmarks_2025,
  author = {OpenMDAO Development Team},
  title  = {OpenMDAO Optimization Benchmarks},
  year   = {2025},
  url    = {https://huggingface.co/datasets/englund/openmdao-benchmarks},
  note   = {Comprehensive benchmarking of optimization algorithms on standard test functions}
}
```

## License

Apache 2.0 - Free for research and commercial use.

## Contact

For questions or contributions, please open an issue on the dataset repository.


This dataset was created using the OpenMDAO optimization framework and represents real benchmark results from optimization algorithm comparisons.