Coordinate Descent for SLOPE

SLOPE
Optimization
Authors

Johan Larsson

Quentin Klopfenstein

Mathurin Massias

Jonas Wallin

Published

25 April 2023

Details

Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, vol. 206, pp. 4802–4821

Abstract

The lasso is the most famous sparse regression and feature selection method. One reason for its popularity is the speed at which the underlying optimization problem can be solved. Sorted L-One Penalized Estimation (SLOPE) is a generalization of the lasso with appealing statistical properties. In spite of this, the method has not yet reached widespread interest. A major reason for this is that current software packages that fit SLOPE rely on algorithms that perform poorly in high dimensions. To tackle this issue, we propose a new fast algorithm to solve the SLOPE optimization problem, which combines proximal gradient descent and proximal coordinate descent steps. We provide new results on the directional derivative of the SLOPE penalty and its related SLOPE thresholding operator, as well as provide convergence guarantees for our proposed solver. In extensive benchmarks on simulated and real data, we show that our method outperforms a long list of competing algorithms.
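To illustrate the sorted L-One penalty at the heart of SLOPE, the sketch below implements its proximal operator, which both proximal gradient and proximal coordinate descent solvers rely on. This is a plain NumPy rendering of the standard stack-based (pool-adjacent-violators) algorithm for the sorted L1 prox, not the authors' implementation; the function name and interface are illustrative.

```python
import numpy as np

def prox_sorted_l1(y, lambdas):
    """Prox of the sorted L1 norm J(x) = sum_i lambda_i * |x|_(i),
    where |x|_(1) >= |x|_(2) >= ... and lambdas is non-increasing.

    Solves argmin_x 0.5 * ||x - y||^2 + J(x) via a stack-based
    pool-adjacent-violators pass (illustrative sketch)."""
    sign = np.sign(y)
    y_abs = np.abs(y)
    order = np.argsort(y_abs)[::-1]   # sort |y| in decreasing order
    z = y_abs[order] - lambdas        # per-coordinate soft-thresholded values

    # Pool adjacent violators: enforce a non-increasing solution by
    # merging blocks whose averages violate monotonicity.
    blocks = []  # each block is [start, end, sum_of_z]
    for i in range(len(z)):
        blocks.append([i, i, z[i]])
        while len(blocks) > 1 and (
            blocks[-2][2] / (blocks[-2][1] - blocks[-2][0] + 1)
            <= blocks[-1][2] / (blocks[-1][1] - blocks[-1][0] + 1)
        ):
            _, end, total = blocks.pop()
            blocks[-1][1] = end
            blocks[-1][2] += total

    # Each block takes its (clipped) average value.
    x_sorted = np.zeros_like(z)
    for start, end, total in blocks:
        x_sorted[start:end + 1] = max(total / (end - start + 1), 0.0)

    # Undo the sorting and restore the original signs.
    x = np.zeros_like(y_abs)
    x[order] = x_sorted
    return sign * x
```

For example, with `y = [4, 3, 1]` and `lambdas = [2, 1, 0.5]` the differences `[2, 2, 0.5]` are already non-increasing after one merge, so the prox simply shrinks each coordinate by its matched penalty. When SLOPE's penalty sequence is constant, this reduces to the lasso's soft-thresholding operator.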


Citation

BibTeX citation:
@inproceedings{larsson2023,
  author = {Larsson, Johan and Klopfenstein, Quentin and Massias,
    Mathurin and Wallin, Jonas},
  editor = {Ruiz, Francisco and Dy, Jennifer and van de Meent,
    Jan-Willem},
  publisher = {PMLR},
  title = {Coordinate Descent for {SLOPE}},
  booktitle = {Proceedings of the 26th {International} {Conference} on
    {Artificial} {Intelligence} and {Statistics}},
  series = {Proceedings of Machine Learning Research},
  volume = {206},
  pages = {4802--4821},
  date = {2023-04-25},
  url = {https://proceedings.mlr.press/v206/larsson23a.html},
  langid = {en},
  abstract = {The lasso is the most famous sparse regression and feature
    selection method. One reason for its popularity is the speed at
    which the underlying optimization problem can be solved. Sorted
    L-One Penalized Estimation (SLOPE) is a generalization of the lasso
    with appealing statistical properties. In spite of this, the method
    has not yet reached widespread interest. A major reason for this is
    that current software packages that fit SLOPE rely on algorithms
    that perform poorly in high dimensions. To tackle this issue, we
    propose a new fast algorithm to solve the SLOPE optimization
    problem, which combines proximal gradient descent and proximal
    coordinate descent steps. We provide new results on the directional
    derivative of the SLOPE penalty and its related SLOPE thresholding
    operator, as well as provide convergence guarantees for our proposed
    solver. In extensive benchmarks on simulated and real data, we show
    that our method outperforms a long list of competing algorithms.}
}
For attribution, please cite this work as:
Larsson, Johan, Quentin Klopfenstein, Mathurin Massias, and Jonas Wallin. 2023. “Coordinate Descent for SLOPE.” In Proceedings of the 26th International Conference on Artificial Intelligence and Statistics, edited by Francisco Ruiz, Jennifer Dy, and Jan-Willem van de Meent, 206:4802–21. Proceedings of Machine Learning Research. PMLR. https://proceedings.mlr.press/v206/larsson23a.html.