Benchopt: Reproducible, Efficient and Collaborative Optimization Benchmarks

Categories: Optimization, Python
Authors

Thomas Moreau, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, Tom Dupré la Tour, Ghislain Durif, Cassio F. Dantas, Quentin Klopfenstein, Johan Larsson, En Lai, Tanguy Lefort, Benoit Malézieux, Badr Moufad, Binh T. Nguyen, Alain Rakotomamonjy, Zaccharie Ramzi, Joseph Salmon, Samuel Vaiter

Published

6 December 2022

Details

Advances in Neural Information Processing Systems, vol. 35, pp. 25404-25421

Abstract

Numerical validation is at the core of machine learning research as it allows one to assess the actual impact of new methods and to confirm the agreement between theory and practice. Yet, the rapid development of the field poses several challenges: researchers are confronted with a profusion of methods to compare, limited transparency and consensus on best practices, as well as tedious re-implementation work. As a result, validation is often very partial, which can lead to wrong conclusions that slow down the progress of research. We propose Benchopt, a collaborative framework to automate, reproduce and publish optimization benchmarks in machine learning across programming languages and hardware architectures. Benchopt simplifies benchmarking for the community by providing an off-the-shelf tool for running, sharing and extending experiments. To demonstrate its broad usability, we showcase benchmarks on three standard learning tasks: $\ell_2$-regularized logistic regression, Lasso, and ResNet18 training for image classification. These benchmarks highlight key practical findings that give a more nuanced view of the state of the art for these problems, showing that for practical evaluation, the devil is in the details. We hope that Benchopt will foster collaborative work in the community, hence improving the reproducibility of research findings.
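
To make the "off-the-shelf tool" mentioned in the abstract concrete, the sketch below shows how a solver is typically added to a Benchopt benchmark: a small Python class implementing the BaseSolver interface of the benchopt package. The Lasso-style objective parameters (X, y, lmbd), the solver name and the ISTA update are illustrative assumptions rather than code from the paper, and exact method signatures can vary across benchopt versions.

# Minimal sketch of a Benchopt solver (illustrative assumptions:
# a Lasso-style objective passing X, y, lmbd, and an ISTA update).
import numpy as np

from benchopt import BaseSolver


class Solver(BaseSolver):
    name = "ISTA"  # hypothetical name, shown in the benchmark results

    def set_objective(self, X, y, lmbd):
        # Benchopt hands the objective's data to each solver once.
        self.X, self.y, self.lmbd = X, y, lmbd

    def run(self, n_iter):
        # Proximal gradient descent on the Lasso; Benchopt re-runs this
        # with a growing n_iter to trace the convergence curve.
        L = np.linalg.norm(self.X, ord=2) ** 2  # gradient Lipschitz constant
        w = np.zeros(self.X.shape[1])
        for _ in range(n_iter):
            grad = self.X.T @ (self.X @ w - self.y)
            w -= grad / L
            # soft-thresholding: proximal operator of the l1 penalty
            w = np.sign(w) * np.maximum(np.abs(w) - self.lmbd / L, 0.0)
        self.w = w

    def get_result(self):
        return self.w

Dropped into a benchmark folder next to the objective and dataset definitions, such a class is discovered automatically and compared against all other solvers with a single command, e.g. benchopt run ./benchmark_lasso.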


Citation

BibTeX citation:
@inproceedings{moreau2022,
  author = {Moreau, Thomas and Massias, Mathurin and Gramfort, Alexandre
    and Ablin, Pierre and Bannier, Pierre-Antoine and Charlier, Benjamin
    and Dagréou, Mathieu and Dupré la Tour, Tom and Durif, Ghislain and
    Dantas, Cassio F. and Klopfenstein, Quentin and Larsson, Johan and
    Lai, En and Lefort, Tanguy and Malézieux, Benoit and Moufad, Badr
    and Nguyen, Binh T. and Rakotomamonjy, Alain and Ramzi, Zaccharie
    and Salmon, Joseph and Vaiter, Samuel},
  editor = {Koyejo, S. and Mohamed, S. and Agarwal, A. and Belgrave, D.
    and Cho, K. and Oh, A.},
  title = {Benchopt: Reproducible, Efficient and Collaborative
    Optimization Benchmarks},
  booktitle = {Advances in Neural Information Processing Systems},
  volume = {35},
  pages = {25404-25421},
  date = {2022-12-06},
  url = {https://papers.nips.cc/paper_files/paper/2022/hash/a30769d9b62c9b94b72e21e0ca73f338-Abstract-Conference.html},
  langid = {en},
  abstract = {Numerical validation is at the core of machine learning
    research as it allows one to assess the actual impact of new methods
    and to confirm the agreement between theory and practice. Yet, the
    rapid development of the field poses several challenges: researchers
    are confronted with a profusion of methods to compare, limited
    transparency and consensus on best practices, as well as tedious
    re-implementation work. As a result, validation is often very
    partial, which can lead to wrong conclusions that slow down the
    progress of research. We propose Benchopt, a collaborative framework
    to automate, reproduce and publish optimization benchmarks in
    machine learning across programming languages and hardware
    architectures. Benchopt simplifies benchmarking for the community by
    providing an off-the-shelf tool for running, sharing and extending
    experiments. To demonstrate its broad usability, we showcase
    benchmarks on three standard learning tasks: $\ell_2$-regularized
    logistic regression, Lasso, and ResNet18
    training for image classification. These benchmarks highlight key
    practical findings that give a more nuanced view of the
    state of the art for these problems, showing that for practical
    evaluation, the devil is in the details. We hope that Benchopt will
    foster collaborative work in the community, hence improving the
    reproducibility of research findings.}
}
For attribution, please cite this work as:
Moreau, Thomas, Mathurin Massias, Alexandre Gramfort, Pierre Ablin, Pierre-Antoine Bannier, Benjamin Charlier, Mathieu Dagréou, et al. 2022. “Benchopt: Reproducible, Efficient and Collaborative Optimization Benchmarks.” In Advances in Neural Information Processing Systems, edited by S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh, 35:25404–21. https://papers.nips.cc/paper_files/paper/2022/hash/a30769d9b62c9b94b72e21e0ca73f338-Abstract-Conference.html.