A slight modification to the linear regression parametric Bootstrap yields a non-parametric Bootstrap, i.e. one that removes the assumption of Normally distributed residuals, which is often inaccurate. For the non-parametric model, we must first construct a non-parametric distribution of residuals by rescaling them to have constant variance. We define the *modified residual* *r_{i}* as follows:

r_{i}=\frac{e_{i}}{(1-h_{i})^{1/2}}

where the *leverage* *h_{i}* is given by:

h_{i}=\frac{1}{n}+\frac{(x_{i}-\bar{x})^{2}}{SS_{xx}}
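As a minimal sketch of these two formulas (the function name is illustrative; the slope and intercept are fitted with the usual simple least-squares expressions):

```python
import numpy as np

def modified_residuals(x, y):
    """Fit y = c + m*x by least squares, then rescale the raw residuals
    to constant variance: r_i = e_i / sqrt(1 - h_i)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar = x.mean()
    ss_xx = np.sum((x - xbar) ** 2)
    m = np.sum((x - xbar) * (y - y.mean())) / ss_xx   # fitted slope
    c = y.mean() - m * xbar                           # fitted intercept
    e = y - (c + m * x)                               # raw residuals e_i
    h = 1.0 / n + (x - xbar) ** 2 / ss_xx             # leverage h_i
    r = e / np.sqrt(1.0 - h)                          # modified residuals r_i
    return r, h
```

Note that for simple linear regression the leverages always sum to 2 (one for each fitted parameter), which is a quick sanity check on the computation.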

The mean of the modified residuals, \bar{r}, is calculated. A Bootstrap sample *r_{j}** is then drawn from the set of *r* values and used to determine the quantity (\widehat{y}_{j}+r_{j}^{*}-\bar{r}) for each *x_{j}* value, which is used in step 2 of the algorithm above. The model Non-Parametric Regression Bootstrap provides an illustration.
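The whole procedure can be sketched as follows for the slope estimate. This is a minimal illustration, not the referenced model: the function name is an assumption, and the synthetic responses \widehat{y}_{j}+r_{j}^{*}-\bar{r} are formed here by mean-centring the modified residuals before resampling:

```python
import numpy as np

def residual_bootstrap_slope(x, y, n_boot=1000, seed=0):
    """Non-parametric residual Bootstrap for the slope of y = c + m*x.
    Residuals are leverage-corrected and mean-centred before resampling."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    xbar = x.mean()
    ss_xx = np.sum((x - xbar) ** 2)
    m = np.sum((x - xbar) * (y - y.mean())) / ss_xx
    c = y.mean() - m * xbar
    y_hat = c + m * x                          # fitted values
    h = 1.0 / n + (x - xbar) ** 2 / ss_xx      # leverages
    r = (y - y_hat) / np.sqrt(1.0 - h)         # modified residuals
    r_centred = r - r.mean()                   # subtract r-bar
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        r_star = rng.choice(r_centred, size=n, replace=True)
        y_star = y_hat + r_star                # synthetic responses
        slopes[b] = np.sum((x - xbar) * (y_star - y_star.mean())) / ss_xx
    return slopes
```

The array of resampled slopes can then be used to form standard errors or percentile confidence intervals for the slope.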


In certain problems, it is logical that the *y*-intercept value *c* be set to zero. In this situation, the leverage values are different:

h_{i}=\frac{x^{2}_{i}}{\displaystyle\sum_{j=1}^{n}x^{2}_{j}}

The modified residuals are thus also different and won't sum to zero, so it is essential to mean-correct the residuals before they are used to simulate random errors.
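A short sketch of the no-intercept leverage formula (the function name is illustrative):

```python
import numpy as np

def leverage_through_origin(x):
    """Leverage values for the no-intercept model y = m*x:
    h_i = x_i**2 / sum_j x_j**2."""
    x = np.asarray(x, float)
    return x ** 2 / np.sum(x ** 2)
```

Here the leverages sum to 1 rather than 2, reflecting the single fitted parameter, and the resulting modified residuals must be mean-corrected before resampling, as noted above.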

Bootstrapping the data pairs is more robust than Bootstrapping the residuals, since it is less sensitive to deviations from the regression assumptions, but it is less accurate when those assumptions hold. However, as the data set grows in size, the results from Bootstrapping the pairs approach those from Bootstrapping the residuals, and Bootstrapping the pairs is also easier to execute. These techniques can be extended to non-linear regression, non-constant variance, and multiple linear regression, as described in detail in Efron and Tibshirani (1993) and Davison and Hinkley (1997).
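For comparison, Bootstrapping the pairs can be sketched as follows (again a minimal illustration with an assumed function name): whole (x, y) observations are resampled with replacement and the model is refitted each time, with no residual correction needed.

```python
import numpy as np

def pairs_bootstrap_slope(x, y, n_boot=1000, seed=0):
    """Bootstrap the (x, y) data pairs: resample whole observations
    with replacement and refit the slope on each resample."""
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slopes = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample row indices
        xb, yb = x[idx], y[idx]
        xbar = xb.mean()
        slopes[b] = (np.sum((xb - xbar) * (yb - yb.mean()))
                     / np.sum((xb - xbar) ** 2))
    return slopes
```

Because each resample keeps x and y values paired, this version makes no assumption about the error distribution or its variance, which is the source of its extra robustness.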