A computational approach to nonparametric regression: bootstrapping CMARS method

Yazici C., Yerlikaya-Özkurt F., BATMAZ İ.

Machine Learning, vol.101, no.1-3, pp.211-230, 2015 (SCI-Expanded)

  • Publication Type: Article
  • Volume: 101 Issue: 1-3
  • Publication Date: 2015
  • Doi Number: 10.1007/s10994-015-5502-3
  • Journal Name: Machine Learning
  • Journal Indexes: Science Citation Index Expanded (SCI-EXPANDED), Scopus
  • Page Numbers: pp.211-230
  • Keywords: Bootstrapping regression, Conic multivariate adaptive regression splines, Fixed-X resampling, Random-X resampling, Wild bootstrap, Machine learning
  • TED University Affiliated: Yes


© 2015, The Author(s). Bootstrapping is a computer-intensive statistical method that treats the data set as a population and draws samples from it with replacement. This resampling method has wide application, especially in mathematically intractable problems. In this study, it is used to obtain the empirical distributions of the parameters of conic multivariate adaptive regression splines (CMARS), a statistical machine learning algorithm and a special case of nonparametric regression, in order to determine whether those parameters are statistically significant. CMARS is a modified version of the well-known nonparametric regression model multivariate adaptive regression splines (MARS) that uses conic quadratic optimization. CMARS is at least as complex as MARS, even though it performs better with respect to several criteria. To achieve better CMARS performance with a less complex model, three different bootstrapping regression methods, namely random-X, fixed-X, and wild bootstrap, are applied to four data sets of different sizes and scales. The performances of the resulting models are then compared using various criteria, including accuracy, precision, complexity, stability, robustness, and computational efficiency. The results imply that the bootstrap methods give more precise parameter estimates, although they are computationally inefficient, and that among them, random-X resampling produces better models, particularly for medium-size and medium-scale data sets.
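The core idea of random-X (pairs) resampling for assessing parameter significance can be sketched in a few lines. This is a minimal illustration only, not the paper's method: ordinary least squares on synthetic data stands in for the CMARS model, and the data, seed, and coefficient names are invented for the example. A parameter is judged significant when its bootstrap percentile interval excludes zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = 2*x1 - 1*x2 + noise. A stand-in for a real data set;
# plain least squares stands in for fitting a CMARS model.
n = 200
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

def fit(X, y):
    """Ordinary least squares with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# Random-X bootstrap: resample (x_i, y_i) pairs with replacement and
# refit, collecting the empirical distribution of each coefficient.
B = 1000
boot = np.empty((B, 3))
for b in range(B):
    idx = rng.integers(0, n, size=n)
    boot[b] = fit(X[idx], y[idx])

# 95% percentile intervals; an interval that excludes 0 indicates a
# statistically significant parameter.
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for name, l, h in zip(["intercept", "x1", "x2"], lo, hi):
    print(f"{name}: [{l:.3f}, {h:.3f}]  significant={not (l <= 0 <= h)}")
```

Fixed-X resampling would instead keep the design matrix fixed and resample residuals, while the wild bootstrap perturbs each residual by a random multiplier; both reuse the same fit-and-collect loop shown above.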