Decision Tree Regression with Python

Last Update: February 6, 2020

Supervised machine learning consists of determining which class output target data belongs to, or predicting its value, by learning an optimal mapping from input predictor data. The main supervised learning tasks are classification and regression.

This topic is part of the Regression Machine Learning with Python course. Feel free to take a look at the Course Curriculum.

This tutorial has an educational and informational purpose and doesn’t constitute any type of forecasting, business, trading or investment advice. All content, including code and data, is presented for personal educational use exclusively and with no guarantee of exactness or completeness. Past performance doesn’t guarantee future results. Please read the full Disclaimer.

An example of a supervised learning algorithm is decision tree regression [1], which predicts the output target feature by optimal recursive binary node splitting of the output target and input predictor feature data into incrementally smaller nodes. The top node is the root node, internal nodes are decision nodes and terminal nodes are leaf nodes. Tree pruning and time series cross-validation are used to lower the variance error source generated by greater model complexity.
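
As a minimal sketch of how pruning and time series cross-validation can work together in scikit-learn, the lines below combine cost-complexity pruning (the ccp_alpha parameter) with TimeSeriesSplit; the toy return series, variable names and candidate alpha grid are hypothetical, not part of the original example.

import numpy as np
import sklearn.tree as ml
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Hypothetical daily return series; the previous day's value is the single predictor.
rng = np.random.default_rng(0)
r = rng.normal(0, 0.01, 500)
x = r[:-1].reshape(-1, 1)  # previous observation as predictor
y = r[1:]                  # current observation as target

# Time series cross-validation: every validation fold comes after its
# training fold, preserving temporal order.
tscv = TimeSeriesSplit(n_splits=5)

# Cost-complexity pruning: larger ccp_alpha prunes more aggressively, trading
# variance for bias; grid search picks the value with lowest validation MSE.
grid = GridSearchCV(ml.DecisionTreeRegressor(),
                    param_grid={'ccp_alpha': [0.0, 1e-6, 1e-5, 1e-4]},
                    cv=tscv, scoring='neg_mean_squared_error')
grid.fit(x, y)
print(grid.best_params_)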

1. Algorithm definition.

The classification and regression trees (CART) algorithm uses a greedy top-down approach to find optimal recursive binary node splits, locally minimizing the variance at terminal nodes as measured through a mean squared error function at each stage.

2. Formula notation.

\min\left ( mse \right )=\frac{1}{n}\sum_{t=1}^{n}\left ( y_{t}-\hat{y}_{t} \right )^{2}

\hat{y}_{t}=\frac{1}{m}\sum_{i=1}^{m}y_{i}

Where y_{t} = output target feature data, \hat{y}_{t} = mean of the output target feature in the terminal node containing observation t, y_{i} = output target feature observations within that terminal node, n = number of observations, m = number of observations in the terminal node.
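
To make the criterion concrete, below is a minimal brute-force sketch of a single optimal binary split on one predictor, directly applying the two formulas above; the function best_split and the toy arrays are hypothetical.

import numpy as np

def best_split(x, y):
    # Sort by the predictor, then try the midpoint between each pair of consecutive
    # values as a candidate threshold, keeping the split with the lowest total squared error.
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_threshold, best_sse = None, np.inf
    for i in range(1, len(xs)):
        threshold = (xs[i - 1] + xs[i]) / 2
        left, right = ys[:i], ys[i:]
        # Each candidate terminal node predicts its own mean, per the formula for y-hat above;
        # minimizing total squared error is equivalent to minimizing mse for fixed n.
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_threshold, best_sse = threshold, sse
    return best_threshold, best_sse

x = np.array([-0.08, -0.01, 0.0, 0.01, 0.05])
y = np.array([0.04, 0.001, 0.0, -0.002, 0.001])
print(best_split(x, y))  # threshold minimizing mse, and its total squared error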

3. Python code example.

3.1. Import Python packages [2].

import numpy as np  # arrays and numerical functions
import pandas as pd  # data reading and manipulation
import sklearn.tree as ml  # decision tree estimators

3.2. Decision tree regression data reading, target and predictor feature creation, and training and testing range delimiting.

  • Data: S&P 500® index replicating ETF (ticker symbol: SPY) daily adjusted close prices (2007-2015).
  • Daily arithmetic returns are used for the target feature (current day) and the predictor feature (previous day).
  • Target and predictor feature creation and training and testing range delimiting are not fixed and are only included for educational purposes.
spy = pd.read_csv('Data//Decision-Tree-Regression-Data.txt', index_col='Date', parse_dates=True)
rspy = spy.pct_change(1)  # daily arithmetic returns
rspy.columns = ['rspy']  # target feature: current day return
rspy1 = rspy.shift(1)  # predictor feature: previous day return
rspy1.columns = ['rspy1']
rspyall = rspy.join(rspy1)  # align target and predictor on dates
rspyall = rspyall.dropna()  # drop rows left empty by differencing and shifting
rspyt = rspyall['2007-01-01':'2014-01-01']  # training range
rspyf = rspyall['2014-01-01':'2016-01-01']  # testing range
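
As a quick sanity check (not part of the original listing), the first rows of rspyall can be printed to confirm that rspy1 is rspy lagged by one day:

# Each row pairs the current day return (rspy) with the previous day return (rspy1).
print(rspyall.head(3))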

3.3. Decision tree regression fitting, structure and output.

  • Decision tree regression fitting within the training range.
  • Decision tree regression fitting parameters are not fixed and are only included for educational purposes.
# 'squared_error' is the current name of the MSE criterion (criterion='mse' before scikit-learn 1.0).
dtt = ml.DecisionTreeRegressor(criterion='squared_error', max_depth=1).fit(np.array(rspyt['rspy1']).reshape(-1, 1), rspyt['rspy'])
In:
# tree_.value holds each node's predicted mean of y; tree_.threshold holds each
# decision node's split point. For a depth-one tree, node 0 is the root and
# nodes 1 and 2 are its left and right terminal nodes.
dtts = [{'0': 'Node:', '1': 'Y Value:', '2': 'Split Threshold:'},
        {'0': 'Root', '1': np.round(dtt.tree_.value[0], 6), '2': np.round(dtt.tree_.threshold[0], 6)},
        {'0': 'Terminal Left', '1': np.round(dtt.tree_.value[1], 6), '2': ''},
        {'0': 'Terminal Right', '1': np.round(dtt.tree_.value[2], 6), '2': ''}]
print('== Decision Tree Regression Structure ==')
print(pd.DataFrame(dtts))
Out:
== Decision Tree Regression Structure ==
                0             1                 2
0           Node:      Y Value:  Split Threshold:
1            Root  [[0.000342]]         -0.072036
2   Terminal Left  [[0.043869]]                  
3  Terminal Right  [[0.000243]]
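
The testing range rspyf delimited earlier is not used in the listing above; a minimal sketch of out-of-sample prediction with the fitted tree follows, where the variable rspyfp is hypothetical:

# With max_depth=1 there are only two terminal nodes, so every prediction
# equals one of the two leaf means shown in the structure table above.
rspyfp = dtt.predict(np.array(rspyf['rspy1']).reshape(-1, 1))
print(rspyfp[:5])
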
4. References.

[1] Breiman L, Friedman JH, Olshen RA, Stone CJ. “Classification and Regression Trees”. CRC Press. 1984.

[2] Travis E. Oliphant. “A Guide to NumPy”. USA: Trelgol Publishing. 2006.

Stéfan van der Walt, S. Chris Colbert and Gaël Varoquaux. “The NumPy Array: A Structure for Efficient Numerical Computation”. Computing in Science & Engineering. 2011.

Wes McKinney. “Data Structures for Statistical Computing in Python.” Proceedings of the 9th Python in Science Conference. 2010.

Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, Édouard Duchesnay. “Scikit-learn: Machine Learning in Python”. Journal of Machine Learning Research. 2011.
