On multivariate randomized classification trees: ℓ0-based sparsity, VC dimension and decomposition methods

Amaldi, E.; Consolo, A.
2023-01-01

Abstract

Decision trees are widely used classification and regression models because of their interpretability and good accuracy. Classical methods such as CART are based on greedy approaches, but growing attention has recently been devoted to optimal decision trees. We investigate the nonlinear continuous optimization formulation proposed in Blanquero et al. (2020) for training sparse optimal randomized classification trees. Sparsity is important not only for feature selection but also for improving interpretability. We first consider alternative methods to sparsify such trees based on concave approximations of the ℓ0 "norm". Promising results are obtained on 24 datasets in comparison with the original ℓ1 and ℓ∞ regularizations. Then, we derive bounds on the VC dimension of multivariate randomized classification trees. Finally, since training is computationally challenging for large datasets, we propose a general node-based decomposition scheme and a practical version of it. Experiments on larger datasets show that the proposed decomposition method significantly reduces training times without compromising testing accuracy.
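To illustrate the idea of sparsifying via a concave surrogate of the ℓ0 "norm", the sketch below compares the ℓ1 penalty with one commonly used smooth concave approximation, the exponential surrogate sum_j (1 - exp(-α|w_j|)). This is a generic illustration under assumed notation (weights `w`, sharpness parameter `alpha`), not the exact formulation used in the paper.

```python
import math

def l1_penalty(w):
    # ell_1 regularizer: sum of absolute values; shrinks all weights,
    # including the ones a sparse model would like to keep large.
    return sum(abs(v) for v in w)

def concave_l0_surrogate(w, alpha=5.0):
    # Smooth concave approximation of the ell_0 "norm":
    #   sum_j (1 - exp(-alpha * |w_j|)).
    # Each term is in [0, 1) and tends to the indicator 1{w_j != 0}
    # as alpha grows, so the sum approaches the number of nonzeros.
    return sum(1.0 - math.exp(-alpha * abs(v)) for v in w)

w = [0.0, 0.02, -1.5, 3.0]          # hypothetical branching weights
print(l1_penalty(w))                 # 4.52
print(concave_l0_surrogate(w, alpha=200.0))   # ~2.98, close to the 3 nonzeros
print(sum(1 for v in w if v != 0.0))          # exact ell_0 value: 3
```

Unlike ℓ1, the surrogate barely penalizes large weights once they are clearly nonzero, which is why concave approximations tend to select features more aggressively while distorting the retained coefficients less.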
Machine Learning
Randomized classification trees
Sparsity
Decomposition methods
Nonlinear programming
Files in this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11311/1260972
Citations
  • PMC: not available
  • Scopus: 2
  • Web of Science: not available