Abstract: This paper reviews the literature on choosing tuning parameters for ridge regression and the lasso. These regularized regressions introduce a small amount of bias in exchange for a considerable reduction in the variance of the predicted values, thereby improving prediction accuracy. For linear predictive models, AIC or BIC can be used to select the tuning parameter. For general predictive models, however, cross-validation and the bootstrap work better because they estimate prediction error directly; in practice, cross-validation is more widely used than the bootstrap. An empirical example illustrates how cross-validated lasso works and suggests that the lasso can mitigate the multicollinearity problem.
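As a minimal sketch of the cross-validated lasso the abstract mentions (this does not reproduce the paper's empirical example; the synthetic data, the use of scikit-learn's `LassoCV`, and the 5-fold setting are all illustrative assumptions):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical data, not from the paper: build predictors with
# deliberate multicollinearity (x2 is nearly identical to x1).
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)  # nearly collinear with x1
x3 = rng.normal(size=n)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.5, size=n)

# LassoCV selects the tuning parameter (alpha) by minimizing
# 5-fold cross-validated prediction error over a grid of alphas.
model = LassoCV(cv=5).fit(X, y)
print("chosen alpha:", model.alpha_)
print("coefficients:", model.coef_)
```

Because the lasso's L1 penalty can shrink coefficients exactly to zero, it tends to spread weight onto only one of a group of highly collinear predictors, which is the sense in which it can mitigate multicollinearity.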