Package org.mymedialite.programs

Class Summary
ItemRecommendation
  Item prediction and evaluation program using positive-only feedback.

  Usage: item_recommendation --training-file=FILE --recommender=METHOD [OPTIONS]

  Method ARGUMENTS have the form name=value.

  General OPTIONS:
    --recommender=METHOD          use METHOD for recommendations (default: MostPopular)
    --group-recommender=METHOD    use METHOD to combine the predictions for several users
    --recommender-options=OPTIONS use OPTIONS as recommender options
    --help                        display this usage information and exit
    --version                     display version information and exit
    --random-seed=N               initialize the random number generator with N

  Files:
    --training-file=FILE          read training data from FILE
    --test-file=FILE              read test data from FILE
    --file-format=ignore_first_line|default
    --data-dir=DIR                load all files from DIR
    --user-attributes=FILE        file containing user attribute information, one tuple per line
    --item-attributes=FILE        file containing item attribute information, one tuple per line
    --user-relations=FILE         file containing user relation information, one tuple per line
    --item-relations=FILE         file containing item relation information, one tuple per line
    --user-groups=FILE            file containing group-to-user mappings, one tuple per line
    --save-model=FILE             save the computed model to FILE
    --load-model=FILE             load the model from FILE

  Data interpretation:
    --user-prediction             transpose the user-item matrix and perform user prediction instead of item prediction
    --rating-threshold=NUM        (for rating datasets) interpret a rating >= NUM as positive feedback

  Choosing the items for evaluation/prediction (mutually exclusive):
    --candidate-items=FILE        use the items in FILE (one per line) as candidate items in the evaluation
    --overlap-items               use only items that occur in both the training and the test set as candidate items in the evaluation
    --in-training-items           use only the items in the training set as candidate items in the evaluation
    --in-test-items               use only the items in the test set as candidate items in the evaluation
    --all-items                   use all known items as candidate items in the evaluation

  Choosing the users for evaluation/prediction:
    --test-users=FILE             predict items for the users specified in FILE (one user per line)

  Prediction options:
    --prediction-file=FILE        write ranked predictions to FILE, one user per line
    --predict-items-number=N      predict N items per user (needs --predict-items-file)

  Evaluation options:
    --cross-validation=K          perform k-fold cross-validation on the training data
    --show-fold-results           show results for the individual folds in cross-validation
    --test-ratio=NUM              evaluate by splitting off a NUM part of the feedback
    --num-test-users=N            evaluate on only N randomly picked users (to save time)
    --online-evaluation           perform online evaluation (use every tested user-item combination for incremental training)
    --filtered-evaluation         perform evaluation filtered by item attribute (expects --item-attributes=FILE)
    --repeat-evaluation           assume that items can be accessed repeatedly - items may occur in both the training and the test data for one user
    --compute-fit                 display the fit on the training data

  Finding the right number of iterations (iterative methods):
    --find-iter=N                 print statistics every N iterations
    --max-iter=N                  perform at most N iterations
    --auc-cutoff=NUM              abort if AUC is below NUM
    --prec5-cutoff=NUM            abort if prec@5 is below NUM
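A hypothetical invocation, combining the flags above (the file names are placeholders; BPRMF is one of the recommenders shipped with MyMediaLite, and the options passed via --recommender-options are its name=value method arguments):

```shell
# Train a BPRMF item recommender on u1.base, evaluate it on u1.test,
# and write the top 10 ranked items per user to predictions.txt.
item_recommendation --training-file=u1.base --test-file=u1.test \
    --recommender=BPRMF --recommender-options="num_factors=10 num_iter=30" \
    --prediction-file=predictions.txt --predict-items-number=10
```

This is a usage sketch only; consult the --help output of your installed version for the exact flags it supports.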
RatingPrediction
  Rating prediction program; see its usage() method for more information.
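The --rating-threshold=NUM option of item_recommendation interprets a rating dataset as positive-only feedback: triples with a rating >= NUM become positive (user, item) events and the rest are dropped. A minimal Python sketch of that interpretation (a hypothetical helper for illustration, not MyMediaLite code):

```python
def to_positive_feedback(ratings, threshold):
    """Keep only the (user, item) pairs whose rating meets the threshold.

    ratings: iterable of (user, item, rating) triples.
    Returns positive-only feedback as a list of (user, item) pairs.
    """
    return [(user, item) for user, item, rating in ratings if rating >= threshold]


ratings = [
    ("alice", "item1", 5.0),
    ("alice", "item2", 2.0),
    ("bob",   "item1", 4.0),
]

# With --rating-threshold=4, only the two ratings >= 4 survive:
print(to_positive_feedback(ratings, 4.0))
# [('alice', 'item1'), ('bob', 'item1')]
```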