e-fold cross-validation: A computing and energy-efficient alternative to k-fold cross-validation with adaptive folds [Proposal]

Introduction

K-fold cross-validation is widely regarded as a robust method for model evaluation in machine learning and related fields, including recommender systems. Unlike a simple hold-out split, k-fold cross-validation ensures that each instance in the dataset is used for both training and validation. Furthermore, by performing the evaluation process k times, it yields a more reliable performance estimate with lower variance than a single train-test split.
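The fold mechanics described above can be sketched as follows. This is a minimal illustration, not the proposal's implementation; `kfold_indices` is a hypothetical helper (library implementations such as scikit-learn's `KFold` provide equivalent behaviour):

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Partition sample indices into k disjoint validation folds.

    Hypothetical helper for illustration only.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    folds = np.array_split(indices, k)
    # In each of the k rounds, one fold is held out for
    # validation and the remaining k-1 folds form the training set.
    return [
        (np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
        for i in range(k)
    ]

splits = kfold_indices(10, 5)
# Every instance appears in exactly one validation fold,
# so each is used for both training and validation overall.
all_val = np.concatenate([val for _, val in splits])
assert sorted(all_val.tolist()) == list(range(10))
```

Because each instance lands in the validation set exactly once, the k per-fold scores can be averaged into a single estimate that is less sensitive to how the data happened to be split.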