In model evaluation, what is the primary purpose of a learning curve?
A. To compare different machine learning algorithms
B. To visualize the model's decision boundary
C. To measure the model's prediction accuracy
D. To assess how the model's performance changes as the amount of training data increases
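A learning curve can be sketched in a few lines: repeatedly refit a model on the first n training points and score it on both the seen data and a held-out set. The sketch below uses a deliberately simple "predict the training mean" regressor and made-up numbers, so the shapes are illustrative only:

```python
def mse(y_true, y_pred):
    """Mean squared error between two equal-length sequences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Toy 1-D targets: roughly linear growth, plus a small validation set.
train_y = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.0, 8.1]
val_y = [3.0, 6.0, 4.5]

# Learning curve: refit the (constant) model on the first n training
# points and score it on both the seen points and the validation set.
curve = []
for n in range(1, len(train_y) + 1):
    subset = train_y[:n]
    prediction = sum(subset) / n                 # "fit" the mean model
    train_err = mse(subset, [prediction] * n)
    val_err = mse(val_y, [prediction] * len(val_y))
    curve.append((n, round(train_err, 3), round(val_err, 3)))

for n, tr, va in curve:
    print(f"n={n}  train_mse={tr}  val_mse={va}")
```

The typical pattern appears even here: training error starts at zero and rises as more points must be explained, while validation error falls as the fit stabilises.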
Which metric is used to evaluate the performance of a regression model when you want to know how close the predicted values are to the actual values?
A. Precision
B. Recall
C. Mean Absolute Error (MAE)
D. F1 Score
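MAE is just the average absolute gap between predictions and targets. A minimal implementation with toy numbers:

```python
def mean_absolute_error(y_true, y_pred):
    """Average absolute gap between predictions and ground truth."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Predictions are off by 1, 0, and 2 units respectively.
print(mean_absolute_error([3, 5, 2], [2, 5, 4]))  # → 1.0
```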
What does the term "underfitting" refer to in the context of model evaluation and validation?
A. The model fits the training data perfectly
B. The model has too few parameters
C. The model generalizes well to new data
D. The model has high bias and low variance
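Underfitting can be shown with a constant model on perfectly linear data: the model is too simple to capture the trend, so even its training error is large (high bias), while a model of the right capacity fits exactly. Toy sketch:

```python
# Underfitting in miniature: a constant ("predict the mean") model is
# too simple for linear data, so it misses badly even on the data it
# was trained on -- the signature of high bias.
xs = list(range(10))
ys = [2 * x for x in xs]            # perfectly linear ground truth

mean_pred = sum(ys) / len(ys)       # the underfit model: one number
train_mse = sum((y - mean_pred) ** 2 for y in ys) / len(ys)

# A model with the right capacity (slope 2, intercept 0) fits exactly.
fit_mse = sum((y - 2 * x) ** 2 for x, y in zip(xs, ys)) / len(ys)

print(train_mse)  # large: the simple model cannot capture the trend
print(fit_mse)    # → 0.0
```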
In cross-validation, what is the main drawback of Leave-One-Out Cross-Validation (LOOCV) compared to k-fold cross-validation?
A. LOOCV requires more computational resources
B. LOOCV is prone to overfitting
C. LOOCV may not be representative of the dataset
D. LOOCV is more computationally efficient
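The computational cost difference is easy to see: LOOCV refits the model once per sample, so a dataset of n points needs n fits, versus only k fits for k-fold. A pure-Python LOOCV loop with a "predict the mean" model (illustrative only):

```python
def loocv_mse(ys):
    """Leave-one-out CV for a 'predict the training mean' model.
    Returns (cv_error, number_of_model_fits)."""
    n, fits, sq_err = len(ys), 0, 0.0
    for i in range(n):
        train = ys[:i] + ys[i + 1:]      # leave sample i out
        pred = sum(train) / len(train)   # refit on the remaining n-1
        fits += 1
        sq_err += (ys[i] - pred) ** 2
    return sq_err / n, fits

ys = [1.0, 2.0, 3.0, 4.0, 5.0]
err, fits = loocv_mse(ys)
print(fits)  # → 5: one fit per sample, vs. k fits for k-fold
```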
What is the primary purpose of a ROC-AUC (Receiver Operating Characteristic - Area Under the Curve) score in model evaluation?
A. To compare different machine learning algorithms
B. To visualize the model's decision boundary
C. To measure the model's prediction accuracy
D. To evaluate a classifier's ability to distinguish between classes across all decision thresholds
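ROC-AUC has a handy probabilistic reading: it is the chance that a randomly chosen positive example receives a higher score than a randomly chosen negative one (ties count half). The O(P·N) pairwise sketch below uses made-up scores:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a random positive outranks a
    random negative (ties count half). O(P*N) pairwise version."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 3 of the 4 positive/negative pairs are ranked correctly.
print(roc_auc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # → 0.75
```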
Which evaluation metric is typically used for regression models and measures the square root of the average squared difference between predicted and actual values?
A. Accuracy
B. Mean Absolute Error (MAE)
C. Root Mean Square Error (RMSE)
D. F1 Score
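RMSE is the square root of the mean squared error, which puts the result back in the original units of the target. A minimal sketch with toy numbers:

```python
import math

def rmse(y_true, y_pred):
    """Square root of the mean squared error."""
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
    return math.sqrt(mse)

# Errors of 1, 0, and 2: MSE = 5/3, so RMSE = sqrt(5/3) ≈ 1.291.
print(round(rmse([2, 4, 6], [3, 4, 8]), 3))  # → 1.291
```

Because the errors are squared before averaging, RMSE penalises large misses more heavily than MAE does.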
In model evaluation, what is the primary disadvantage of using the training dataset to assess a model's performance?
A. It can lead to data leakage
B. It results in a biased evaluation
C. It requires more computational resources
D. It simplifies the model's architecture
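The optimistic bias is easy to demonstrate: a 1-nearest-neighbour classifier memorises its training points, so its training accuracy is always 1.0, even when the labels carry no learnable pattern. Toy sketch:

```python
def nn1_predict(train_x, train_y, x):
    """Label of the closest training point (1-D distance for brevity)."""
    best = min(range(len(train_x)), key=lambda i: abs(train_x[i] - x))
    return train_y[best]

train_x = [0.0, 1.0, 2.0, 3.0]
train_y = [0, 1, 0, 1]            # labels with no real pattern

# Each point's nearest neighbour is itself, so every prediction is
# "correct" -- an optimistically biased estimate of performance.
train_acc = sum(nn1_predict(train_x, train_y, x) == y
                for x, y in zip(train_x, train_y)) / len(train_x)
print(train_acc)  # → 1.0
```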
Which evaluation metric represents the ratio of true negatives to all actual negative instances and is commonly used in binary classification?
A. Accuracy
B. Precision
C. Recall
D. Specificity
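Specificity counts how many of the actual negatives the model got right. A minimal implementation with toy labels:

```python
def specificity(y_true, y_pred):
    """True-negative rate: TN / (TN + FP) over the actual negatives."""
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tn / (tn + fp)

# 4 actual negatives; the model gets 3 of them right.
print(specificity([0, 0, 0, 1, 1, 0], [0, 1, 0, 1, 0, 0]))  # → 0.75
```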
What does the term "bias" refer to in model evaluation and validation?
A. The model's flexibility
B. The systematic difference between the model's average prediction and the actual values
C. The model's ability to generalize to new data
D. The model's ability to fit the training data
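Bias can be isolated by simulation: fit the same too-simple model on many independently drawn training sets and compare its average prediction at one point with the truth there. The sketch below (made-up data generator, intercept-only model) shows a systematic undershoot:

```python
import random

# Bias as systematic error: fit an intercept-only model to data from
# y = x^2 and measure how far its *average* prediction at x = 2 sits
# from the truth. Averaging over many simulated training sets isolates
# the systematic (bias) component from run-to-run noise (variance).
random.seed(0)
truth = 2 ** 2                                   # f(2) = 4
preds = []
for _ in range(2000):
    xs = [random.uniform(0, 2) for _ in range(50)]
    ys = [x ** 2 + random.gauss(0, 0.1) for x in xs]
    preds.append(sum(ys) / len(ys))              # intercept-only "fit"

bias = sum(preds) / len(preds) - truth
print(bias)  # systematically negative: the model undershoots f(2) = 4
```

Analytically the average prediction tends to E[x²] = 4/3 for x uniform on [0, 2], so the bias settles near 4/3 − 4 ≈ −2.67 no matter how much data each fit sees.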
What is the primary purpose of a confusion matrix in model evaluation?
A. To compare different machine learning algorithms
B. To visualize the model's decision boundary
C. To measure the model's prediction accuracy
D. To evaluate the performance of a classification model
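A confusion matrix is just the four (actual, predicted) counts for binary labels, from which accuracy, precision, recall and specificity all follow. Minimal sketch with toy labels:

```python
from collections import Counter

def confusion_matrix(y_true, y_pred):
    """2x2 counts keyed by (actual, predicted) for binary labels."""
    return Counter(zip(y_true, y_pred))

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
cm = confusion_matrix(y_true, y_pred)
tp, fn = cm[(1, 1)], cm[(1, 0)]
fp, tn = cm[(0, 1)], cm[(0, 0)]
print(f"TP={tp} FN={fn} FP={fp} TN={tn}")  # → TP=3 FN=1 FP=1 TN=3

# The headline metrics all fall out of the same four counts.
print((tp + tn) / len(y_true))             # accuracy → 0.75
```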