
K Nearest Neighbors Regression with R

Last Update: December 30, 2020

Supervised machine learning consists of predicting the class or the value of an output target feature by learning its relationship with input predictor features. The main supervised learning tasks are classification and regression.

This topic is part of the Regression Machine Learning with R course. Feel free to take a look at the Course Curriculum.

This tutorial has an educational and informational purpose and doesn’t constitute any type of forecasting, business, trading or investment advice. All content, including code and data, is presented for personal educational use exclusively and with no guarantee of exactness or completeness. Past performance doesn’t guarantee future results. Please read the full Disclaimer.

An example of a supervised learning algorithm is k nearest neighbors [1], which predicts the output target feature as the average of the stored output target values of the nearest neighbors found in the input predictor features data. Time series cross-validation can be used to estimate or fine-tune the optimal number of nearest neighbors.

1. Distance function definition.

A distance function measures the similarity between observations of the input predictor features data, which can be done through the Euclidean, Manhattan or Minkowski functions.

1.1. Euclidean distance function formula notation.

d_{Euc}(x,y)=\sqrt{\sum_{t=1}^{n}(x_{t}-y_{t})^2}

Where d_{Euc}(x,y) = Euclidean distance between observations x and y of the input predictor features data, x_{t}, y_{t} = their values for predictor feature t, n = number of input predictor features.
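
As a quick illustration (not part of the original tutorial code), the Euclidean, Manhattan and Minkowski distances mentioned above can be computed in base R with the dist() function; the two example vectors below are hypothetical.

x <- c(0.01,-0.02,0.03)
y <- c(0.02,0.01,-0.01)
# distance between observations x and y of the input predictor features
dist(rbind(x,y),method='euclidean')
dist(rbind(x,y),method='manhattan')
dist(rbind(x,y),method='minkowski',p=3)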

2. Nearest neighbors algorithm definition.

The nearest neighbors algorithm searches, within the stored input predictor features data, for the observations most similar to a new observation based on a distance metric. Ball tree, k-dimensional tree or brute force search algorithms can be used for this search.

  • The algorithm objective is to calculate the output target feature prediction as the average of the nearest neighbors’ output target values, either equally weighted or weighted by the inverse of their distances. For regression, the average or arithmetic mean function is used.

2.1. Nearest neighbors algorithm formula notation.

\hat{y}=\frac{1}{k}\sum_{i=1}^{k}y_{i}

Where \hat{y} = output target feature prediction, y_{i} = output target feature value of the i-th nearest neighbor, k = number of nearest neighbors.
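
A minimal sketch of the averaging step above (not part of the original tutorial code), using hypothetical one-dimensional training data and a single new observation x0:

xtrain <- c(0.1,0.2,0.4,0.5,0.9)   # input predictor feature data
ytrain <- c(1.0,1.2,1.8,2.1,3.0)   # output target feature data
x0 <- 0.35                         # new observation
k <- 2                             # number of nearest neighbors
d <- abs(xtrain-x0)                # Euclidean distance in one dimension
nn <- order(d)[1:k]                # indices of the k nearest neighbors
yhat <- mean(ytrain[nn])           # equal weighted average prediction
yhat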

3. R script code example.

3.1. Load R packages [2].

library('quantmod')
library('caret')

3.2. K nearest neighbors regression data reading, target and predictor features creation, training and testing ranges delimiting.

  • Data: S&P 500® index replicating ETF (ticker symbol: SPY) daily adjusted close prices (2007-2015).
  • Daily arithmetic returns are used for the target feature (current day) and the predictor feature (previous day).
  • Target and predictor features creation and training and testing ranges delimiting are not fixed and are only included for educational purposes.
# read daily adjusted close prices and convert to an xts time series
data <- read.csv('K-Nearest-Neighbors-Regression-Data.txt',header=T)
spy <- xts(data[,2],order.by=as.Date(data[,1]))
# target feature: current day return (rspy); predictor feature: previous day return (rspy1)
rspy <- dailyReturn(spy)
rspy1 <- lag(rspy,k=1)
rspyall <- cbind(rspy,rspy1)
colnames(rspyall) <- c('rspy','rspy1')
rspyall <- na.exclude(rspyall)
# training range up to 2014-01-01, testing range from 2014-01-01
rspyt <- window(rspyall,end='2014-01-01')
rspyf <- window(rspyall,start='2014-01-01')
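
As a quick optional check (not part of the original tutorial code), the aligned features and the training and testing range sizes can be inspected:

head(rspyall,3)
nrow(rspyt)
nrow(rspyf)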

3.3. K nearest neighbors regression fitting, mean squared error calculation and output.

  • K nearest neighbors fitting and mean squared error calculation within the training range.
  • The numbers of nearest neighbors considered are not fixed and are only included for educational purposes.
# fit k nearest neighbors regressions with k = 1 and k = 2 within the training range
knnt1 <- knnreg(rspy~rspy1,data=rspyt,k=1)
knnt2 <- knnreg(rspy~rspy1,data=rspyt,k=2)
# training range mean squared errors
knnt1mse <- mean((rspyt$rspy-predict(knnt1,rspyt$rspy1))^2)
knnt2mse <- mean((rspyt$rspy-predict(knnt2,rspyt$rspy1))^2)
In:
knnt1mse
Out:
[1] 6.009445e-07
In:
knnt2mse
Out:
[1] 0.0001022911
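
Section 1 mentioned time series cross-validation for fine tuning the number of nearest neighbors. A hedged sketch of how this could be done with caret’s trainControl(method='timeslice') is shown below; the initialWindow, horizon, skip and tuneGrid values are assumptions chosen for illustration, not values from the original course, and the testing range evaluation is likewise only an illustration.

# assumed settings: rolling origin with roughly three years of daily data per window
ctrl <- trainControl(method='timeslice',initialWindow=756,horizon=1,fixedWindow=TRUE,skip=20)
knncv <- train(rspy~rspy1,data=as.data.frame(rspyt),method='knn',
               tuneGrid=data.frame(k=1:10),trControl=ctrl)
knncv$bestTune
# testing range mean squared error of the tuned model
knnfmse <- mean((rspyf$rspy-predict(knncv,as.data.frame(rspyf)))^2)
knnfmse
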
4. References.

[1] N.S. Altman. “An introduction to kernel and nearest-neighbor nonparametric regression”. The American Statistician. 1992.

[2] Jeffrey A. Ryan and Joshua M. Ulrich. “quantmod: Quantitative Financial Modelling Framework”. R package version 0.4-17. 2020.

Max Kuhn. “caret: Classification and Regression Training”. R package version 6.0-86. 2020.
