0.632+ bootstrap prediction intervals in R from a caret-trained model

Asked by nvbavucw on 2023-09-27, tagged Bootstrap

I am trying to write a function in R that computes a central prediction plus lower and upper prediction intervals from a trained caret model (i.e. a "train" object) using the 0.632+ bootstrap method.
Along the way I have been following a Python example (https://www.saattrupdan.com/posts/2020-03-01-bootstrap-prediction) as a guide, but I am having trouble reproducing it in R. Any guidance would be much appreciated.
My function should take a trained caret model, the training data, and new data as inputs and return prediction intervals. At the moment, however, the interval values it produces are incorrect.

  • As Mark Rieke pointed out in the comments, one problem is that the entire 0.632+ procedure needs to be carried out for each bootstrap split, which my current code does not do (see the sketch just below).
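
To make that point concrete, here is a minimal sketch of what per-split 0.632+ bookkeeping could look like. It reuses n and data from the setup below, fits a plain lm rather than caret::train for brevity, and uses absolute loss for the no-information error; it is meant only to illustrate the structure, not to be a working fix:

# Minimal sketch: the 0.632+ quantities are computed inside each bootstrap
# iteration (plain lm and absolute loss used for brevity; illustration only)
nbootstraps <- 25  # arbitrary number of resamples for the sketch
for (b in 1:nbootstraps) {
  idx   <- sample(n, n, replace = TRUE)
  fit_b <- lm(y ~ explainer, data = data[idx, ])

  preds_in  <- predict(fit_b, newdata = data[idx, ])                  # in-sample predictions
  res_train <- data$y[idx]  - preds_in                                # in-sample residuals
  res_oob   <- data$y[-idx] - predict(fit_b, newdata = data[-idx, ])  # out-of-bag residuals

  # per-split 0.632+ quantities
  err_train <- mean(abs(res_train))
  err_oob   <- mean(abs(res_oob))
  err_ni    <- mean(abs(outer(data$y[idx], preds_in, "-")))  # no-information error
  R_b       <- (err_oob - err_train) / (err_ni - err_train)  # relative overfitting rate
  w_b       <- 0.632 / (1 - 0.368 * R_b)                     # 0.632+ weight for this split

  # the residual distribution attached to this split's predictions is then the
  # w_b-weighted mix of out-of-bag and in-sample residuals
}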

Here is my current code:

library(caret)

# Set the random seed for reproducibility
set.seed(123)

# Generate data
n <- 100
explainer <- runif(n)
y <- 1 + 0.2 * explainer + rnorm(n)
data <- data.frame(explainer, y)

# Fit linear regression models
fit_simple <- lm(y ~ explainer) # A plain old linear model
fit_caret <- train(
  y = y,
  x = data.frame(explainer),
  method = "lm"
) # An identical model, but fit using caret

new_data <- data.frame(explainer = runif(15, min = -10, max = 10))

# Function to calculate prediction intervals using 0.632+ Bootstrap
calculate_prediction_intervals <- function(model, new_data, alpha = 0.05) {
  # Extract training data and outcomes from the model
  X_train <- base::subset(model$trainingData, select = -c(.outcome))
  y_train <- as.numeric(model$trainingData$.outcome)
  n <- nrow(X_train)
  nbootstraps <- as.integer(sqrt(n))
  
  # Initialize matrices to store bootstrap predictions and validation residuals
  bootstrap_preds <- matrix(0, nrow(new_data), nbootstraps)
  val_residuals <- matrix(0, n, nbootstraps)
  
  for (b in 1:nbootstraps) {
    train_idxs <- sample(1:n, n, replace = TRUE)
    val_idxs <- setdiff(1:n, train_idxs)
    
    # Fit a bootstrap sample of the model
    fit_b <- train(
      y = y_train[train_idxs],
      x = X_train[train_idxs, , drop = FALSE],
      method = model$method,
      tuneGrid = model$bestTune,
      trControl = trainControl(method = "none", savePredictions = FALSE)
    )
    
    # Compute validation set predictions and residuals
    preds_val <- predict(fit_b, newdata = X_train[val_idxs, , drop = FALSE])
    val_residuals[val_idxs, b] <- y_train[val_idxs] - preds_val
    
    # Compute bootstrap predictions on new data
    preds_new <- predict(fit_b, newdata = new_data)
    bootstrap_preds[, b] <- preds_new
  }
  
  # Center the bootstrap predictions and residuals
  bootstrap_preds <- bootstrap_preds - colMeans(bootstrap_preds)
  val_residuals <- val_residuals - colMeans(val_residuals)
  
  # Fit the original model to the full training data
  fit <- train(
    y = y_train,
    x = X_train,
    method = model$method,
    tuneGrid = model$bestTune,
    trControl = trainControl(method = "none", savePredictions = FALSE)
  )
  
  preds <- predict(fit, newdata = X_train)
  train_residuals <- y_train - preds
  
  # Calculate various values needed for 0.632+ Bootstrap
  no_information_error <- mean(abs(sample(y_train) - sample(preds)))
  generalization <- abs(colMeans(val_residuals) - mean(train_residuals))
  no_information_val <- abs(no_information_error - train_residuals)
  relative_overfitting_rate <- mean(generalization / no_information_val)
  weight <- 0.632 / (1 - 0.368 * relative_overfitting_rate)
  
  # Calculate prediction residuals
  residuals <- (1 - weight) * train_residuals + weight * colMeans(val_residuals)
  
  # Calculate prediction percentiles
  percentiles <- apply(bootstrap_preds, 1, function(x) {
    quantile(x + residuals, probs = c(alpha / 2, 1 - alpha / 2))
  })
  
  # Create a data frame with predictions, lower, and upper limits
  result <- data.frame(
    fit = predict(fit, newdata = new_data),
    lwr = percentiles[1, ],
    upr = percentiles[2, ]
  )
  
  return(result)
}

My code does not reproduce the prediction intervals expected from the linear model, and increasing the number of bootstrap resamples does not help. Can you help me find where I am going wrong?

> calculate_prediction_intervals(fit_caret, new_data)
           fit        lwr       upr
1   1.18302967 -0.2597420 1.1699486
2   2.07894173 -1.4669930 7.0949444
3   0.71611677 -2.1804343 0.4431974
4   1.37767478 -0.6438284 2.5235400
5   1.68312227 -0.9393278 4.4294951
6   1.71845385 -1.0413210 4.8058089
7   0.06639059 -6.7192473 1.1929259
8   0.58836348 -3.2036975 0.7598031
9   1.55414870 -0.7131324 3.5583779
10  0.04536204 -6.8536552 1.2401264
11  1.76387322 -1.0177667 5.0307556
12 -0.01836307 -7.4146538 1.4246235
13  1.29583653 -0.4646119 2.0345750
14  0.18768121 -5.8312821 1.0571434
15  1.33552830 -0.4831878 2.0921489
> predict(fit_simple, newdata =  new_data, interval= "prediction")
           fit        lwr      upr
1   1.18302967 -0.9262779 3.292337
2   2.07894173 -4.5686088 8.726492
3   0.71611677 -2.0877607 3.519994
4   1.37767478 -1.4345098 4.189859
5   1.68312227 -2.6904110 6.056656
6   1.71845385 -2.8512314 6.288139
7   0.06639059 -6.2672902 6.400071
8   0.58836348 -2.8285939 4.005321
9   1.55414870 -2.1238365 5.232134
10  0.04536204 -6.4117391 6.502463
11  1.76387322 -3.0606644 6.588411
12 -0.01836307 -6.8508475 6.814121
13  1.29583653 -1.1747848 3.766458
14  0.18768121 -5.4394392 5.814802
15  1.33552830 -1.2942424 3.965299

I know that alternatives to the method I am trying to reproduce exist, for example conformal inference, or even just adding the raw residuals to the predictions (a rough sketch of that baseline is included below for reference), but I am after this specific approach here. The method I want should broadly reproduce the approach of https://arxiv.org/abs/2201.11676, similar to other implementations that use tidymodels, e.g. https://www.bryanshalloway.com/2021/04/05/simulating-prediction-intervals/ and the workboots package (https://markjrieke.github.io/workboots/).
I plan to use this function on more complex caret models (i.e. many predictors, not just a linear model) trained with explicitly specified x and y data; I am not using caret's formula interface. Because of that complexity, methods that only work for linear models will not do.
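
For reference, a rough sketch of that simpler "raw residuals" baseline might look like the following. It reuses the same caret refitting pattern as my function above; naive_boot_intervals is just an illustrative name, and this is the alternative I am not pursuing:

# Rough sketch of the simpler baseline: bootstrap refits plus raw residuals.
# naive_boot_intervals is an illustrative name, not part of any package.
naive_boot_intervals <- function(model, new_data, n = 500, alpha = 0.05) {
  X_train <- base::subset(model$trainingData, select = -c(.outcome))
  y_train <- as.numeric(model$trainingData$.outcome)

  sims <- replicate(n, {
    idx <- sample(length(y_train), replace = TRUE)
    fit_b <- train(
      y = y_train[idx],
      x = X_train[idx, , drop = FALSE],
      method = model$method,
      tuneGrid = model$bestTune,
      trControl = trainControl(method = "none", savePredictions = FALSE)
    )
    resid_b <- y_train[idx] - predict(fit_b, newdata = X_train[idx, , drop = FALSE])
    # add raw (uncentred, unweighted) residuals to the new-data predictions
    predict(fit_b, newdata = new_data) + sample(resid_b, nrow(new_data), replace = TRUE)
  })

  data.frame(
    fit = predict(model, new_data),
    lwr = apply(sims, 1, quantile, probs = alpha / 2),
    upr = apply(sims, 1, quantile, probs = 1 - alpha / 2)
  )
}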

Answer 1 (posted by nfs0ujit)

Following the approach from the workboots package, with a few adjustments for caret objects, the code below returns the predictions from all bootstraps (with corrected residuals added), the prediction quantiles for a given alpha, and the fit for the new data.
Note: the formulation differs slightly from the original Python post, although the effect is the same.

# Function to generate prediction intervals for a caret model using bootstrapping
predict_caret_boots <-
  function(model,
           n = 2000,
           alpha = 0.05,
           new_data) {
    # Extract training data and outcomes from the model
    X_train <- base::subset(model$trainingData, select = -c(.outcome))
    y_train <- as.numeric(model$trainingData$.outcome)
    
    # Initialize a list to store predictions
    preds_list <- list()
    
    # Loop through n bootstrap resamples
    for (i in 1:n) {
      # Create a bootstrap sample
      train_idxs <- sample(length(y_train), replace = TRUE)
      boot_X_train <- X_train[train_idxs, , drop = FALSE]
      boot_y_train <- y_train[train_idxs]
      boot_X_oob <- X_train[-train_idxs, , drop = FALSE]
      boot_y_oob <- y_train[-train_idxs]
      
      # Fit a model on the bootstrap sample
      fit_b <- train(
        y = boot_y_train,
        x = boot_X_train,
        method = model$method,
        tuneGrid = model$bestTune,
        trControl = trainControl(method = "none", savePredictions = FALSE)
      )
      
      # Make predictions on the new data
      preds <- predict(fit_b, newdata = new_data)
      
      # Make predictions on training data
      preds_train <- predict(fit_b, newdata = boot_X_train)
      
      # Make predictions on OOB data
      preds_oob <- predict(fit_b, newdata = boot_X_oob)
      
      # Calculate training residuals
      resids_train <- boot_y_train - preds_train
      resids_train <- resids_train - mean(resids_train)
      
      # Calculate OOB residuals
      resids_oob <- boot_y_oob - preds_oob
      resids_oob <- resids_oob - mean(resids_oob)
      
      # Calculate no-information error rate (rmse_ni) with RMSE as the loss function
      combos <- tidyr::crossing(boot_y_train, preds_train)
      rmse_ni <- caret::RMSE(combos$preds_train, combos$boot_y_train)
      
      # Calculate overfit rate
      rmse_oob <- caret::RMSE(boot_y_oob, preds_oob)
      rmse_train <- caret::RMSE(boot_y_train, preds_train)
      overfit <- (rmse_oob - rmse_train) / (rmse_ni - rmse_train)
      
      # Calculate the weight (if overfit = 0, the weight is simply the .632 proportion,
      # so the residual s.d. is just the .632/.368 mix of OOB and training residuals).
      # Use the actual proportion of OOB vs. training samples in this resample,
      # rather than the theoretical average of 0.632/0.368.
      prop_368 <- length(boot_y_oob) / length(boot_y_train)
      prop_632 <- 1 - prop_368
      weight <- prop_632 / (1 - (prop_368 * overfit))
      
      # Determine residual std.dev based on weight
      sd_oob <- stats::sd(resids_oob)
      sd_train <- stats::sd(resids_train)
      sd_resid <- weight * sd_oob + (1 - weight) * sd_train
      
      # Add residuals to predictions
      preds <- preds + stats::rnorm(length(preds), 0, sd_resid)
      
      # Create a data frame with predictions and add it to the list
      preds_df <- data.frame(fit = preds)
      preds_list[[i]] <- preds_df
    }
    
    # Calculate quantiles for each row of preds_list
    
    preds_list <- data.frame(preds_list)
    
    quantiles <-
      apply(preds_list, 1, function(row)
        quantile(row, probs = c(alpha / 2, 1 - alpha / 2)))
    
    # Get the central fit, too
    fit_new <- predict(model, new_data)
    
    
    result <- list(
      preds = data.frame(preds_list),
      quantiles = t(data.frame(quantiles)),
      fit = data.frame(fit_new)
    )
    
    return(result)
  }

A little more tweaking of this function would let it handle caret's preprocessing options explicitly, but as it stands it seems to do the trick nicely. A quick usage sketch follows.
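
For example, calling it on the question's fit_caret and new_data and lining the result up against lm's parametric intervals could look roughly like this (a usage sketch; exact numbers will vary from run to run):

set.seed(123)
boot_pi <- predict_caret_boots(fit_caret, n = 2000, alpha = 0.05, new_data = new_data)

# Assemble the central fit with the bootstrap quantiles
# (the fit column is named fit_new because of data.frame(fit_new) above;
#  quantile column 1 is the lower, column 2 the upper bound)
intervals <- data.frame(
  fit = boot_pi$fit$fit_new,
  lwr = boot_pi$quantiles[, 1],
  upr = boot_pi$quantiles[, 2]
)
intervals

# Parametric intervals from the plain lm fit, for comparison
predict(fit_simple, newdata = new_data, interval = "prediction")

The returned list also keeps the full matrix of simulated predictions in preds, so other summaries (e.g. intervals at a different alpha) can be computed from the same bootstrap draws without refitting.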
