Resampling-Methods 411210002

The document discusses various resampling methods, including linear regression and polynomial regression applied to the 'Auto' dataset. It evaluates model performance using mean squared error and cross-validation techniques. Additionally, it introduces a bootstrap method to estimate the variance of a statistic derived from a portfolio dataset.


Resampling Methods

2024-11-15
library(ISLR2)
set.seed(1)
# Validation-set approach: split the 392 Auto observations in half
train = sample(392, 196)

lm.fit = lm(mpg ~ horsepower, data = Auto, subset = train)

attach(Auto)
# Validation-set MSE of the linear fit
mean((mpg - predict(lm.fit, Auto))[-train]^2)

## [1] 23.26601

# Quadratic fit
lm.fit2 = lm(mpg ~ poly(horsepower, 2), data = Auto, subset = train)
mean((mpg - predict(lm.fit2, Auto))[-train]^2)

## [1] 18.71646

# Cubic fit
lm.fit3 = lm(mpg ~ poly(horsepower, 3), data = Auto, subset = train)
mean((mpg - predict(lm.fit3, Auto))[-train]^2)

## [1] 18.79401

# Repeat with a different validation split
set.seed(2)
train = sample(392, 196)

lm.fit = lm(mpg ~ horsepower, data = Auto, subset = train)
mean((mpg - predict(lm.fit, Auto))[-train]^2)

## [1] 25.72651

lm.fit2 = lm(mpg ~ poly(horsepower, 2), data = Auto, subset = train)
mean((mpg - predict(lm.fit2, Auto))[-train]^2)

## [1] 20.43036

lm.fit3 = lm(mpg ~ poly(horsepower, 3), data = Auto, subset = train)
mean((mpg - predict(lm.fit3, Auto))[-train]^2)

## [1] 20.38533

# glm() with the default gaussian family fits the same least-squares model as lm()
glm.fit = glm(mpg ~ horsepower, data = Auto)
coef(glm.fit)

## (Intercept) horsepower
## 39.9358610 -0.1578447

lm.fit = lm(mpg ~ horsepower, data = Auto)
coef(lm.fit)
## (Intercept) horsepower
## 39.9358610 -0.1578447

library(boot)
glm.fit = glm(mpg ~ horsepower, data = Auto)
# Leave-one-out cross-validation (LOOCV); delta holds the raw and
# bias-corrected estimates of the test MSE
cv.err = cv.glm(Auto, glm.fit)
cv.err$delta

## [1] 24.23151 24.23114

# LOOCV error for polynomial fits of degree 1 to 5
cv.error = rep(0, 5)
for (i in 1:5) {
  lm.fit = glm(mpg ~ poly(horsepower, i), data = Auto)
  cv.error[i] = cv.glm(Auto, glm.fit)$delta[1]
}
cv.error

## [1] 24.23151 19.24821 19.33498 19.42443 19.03321

set.seed(17)
# 10-fold cross-validation for polynomial fits of degree 1 to 10
# (rep(0, 10), not rep(0:10), which would leave a stray 11th element)
cv.error.10 = rep(0, 10)
for (i in 1:10) {
  lm.fit = glm(mpg ~ poly(horsepower, i), data = Auto)
  cv.error.10[i] = cv.glm(Auto, glm.fit, K = 10)$delta[1]
}
cv.error.10

## [1] 24.27207 19.26909 19.34805 19.29496 19.03198 18.89781 19.12061 19.14666
## [9] 18.87013 20.95520

# alpha.fn() returns the minimum-variance portfolio weight
# alpha = (var(Y) - cov(X, Y)) / (var(X) + var(Y) - 2*cov(X, Y))
alpha.fn = function(data, index) {
  X = data$X[index]
  Y = data$Y[index]
  (var(Y) - cov(X, Y)) / (var(X) + var(Y) - 2 * cov(X, Y))
}

alpha.fn(Portfolio, 1:100)

## [1] 0.5758321

# One bootstrap replicate constructed by hand
set.seed(1)
alpha.fn(Portfolio, sample(100, 100, replace = TRUE))

## [1] 0.7368375

# Automate with boot(): 1,000 bootstrap estimates of alpha
boot(Portfolio, alpha.fn, R = 1000)

##
## ORDINARY NONPARAMETRIC BOOTSTRAP
##
##
## Call:
## boot(data = Portfolio, statistic = alpha.fn, R = 1000)
##
##
## Bootstrap Statistics :
## original bias std. error
## t1* 0.5758321 -0.001695873 0.09366347
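The same boot() machinery is not limited to the portfolio statistic; as a minimal sketch (assuming the ISLR2 and boot packages are installed, and using boot.fn as a hypothetical helper name), it can also bootstrap the coefficients of the lm(mpg ~ horsepower) fit from earlier:

```r
library(ISLR2)
library(boot)

# Hypothetical helper: return the least-squares coefficients computed
# on the resampled observations selected by index
boot.fn = function(data, index) {
  coef(lm(mpg ~ horsepower, data = data, subset = index))
}

set.seed(1)
# 1,000 bootstrap replicates of the intercept and slope
boot(Auto, boot.fn, R = 1000)
```

The resulting std. error column can be compared against the formula-based standard errors reported by summary(lm(mpg ~ horsepower, data = Auto)).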
