Commit
Merge pull request #17 from UBC-MDS/susannah_expand_README
README: added contributers, package and function explanations and not…
musiccabin authored Jan 12, 2025
2 parents 8ff709d + 952c06a commit 8fc4533
# compare_classifiers

Compare metrics such as F1 score and confusion matrices for your machine learning models, as well as for Voting and Stacking ensembles of them, then predict on test data with your choice of voting or stacking.

This package is helpful when you are deciding whether to use a single Classifier or combine multiple well-performing Classifiers through an ensemble using Voting or Stacking to yield a more stable and trustworthy classification result. Each of the four functions serves a unique purpose:

`confusion_matrices`: provides confusion matrices side-by-side for all Classifiers to compare their performances.

`compare_f1`: provides a Pandas data frame with one row per Classifier, listing its fit time and its training and test scores.

`ensemble_compare_f1`: provides a Pandas data frame with one row per ensemble (Voting and Stacking), listing its fit time and its training and test scores.

`ensemble_predict`: provides classification predictions via Voting or Stacking multiple Classifiers.

Before using `ensemble_predict` on test or unseen data, we recommend running the other three functions on training data to examine how the Classifiers perform individually and how the Voting and Stacking ensembles compare, so you can make a well-informed decision. Sometimes a single Classifier is the better choice: if its performance rivals that of an ensemble, it yields a simpler and more easily controlled machine learning setup.
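As a conceptual sketch of the comparison this package streamlines, here is what evaluating a Voting and a Stacking ensemble over the same `estimators` list looks like in plain scikit-learn (this illustrates the idea only; it is not the package's implementation, and the model choices here are arbitrary examples):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import StackingClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=123)

# Candidate Classifiers, as (name, estimator) pairs
estimators = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("dt", DecisionTreeClassifier(random_state=123)),
]

# Voting combines the Classifiers' predictions by majority rule;
# Stacking trains a meta-model on their outputs.
voting = VotingClassifier(estimators).fit(X_train, y_train)
stacking = StackingClassifier(estimators).fit(X_train, y_train)

for name, model in [("voting", voting), ("stacking", stacking)]:
    score = f1_score(y_test, model.predict(X_test), average="macro")
    print(f"{name}: {score:.3f}")
```

Comparing these scores against each individual Classifier's score is the judgment call the four functions above are designed to support.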

## Contributors

Ke Gao, Bryan Lee, Susannah Sun, Wangkai Zhu

## Installation

```python
from compare_classifiers.ensemble_predict import ensemble_predict
ensemble_predict(estimators, X, y, ensemble_method, unseen_data, 'voting')
```

## Similar Packages

We are not aware of any similar existing packages. While there are functions that present metrics for a single model or a single ensemble, we have not found any that compare and display metrics and results for multiple models or ensembles at once, nor one that predicts based on a dynamic choice of ensemble method.

## Contributing

Interested in contributing? Check out the contributing guidelines. Please note that this project is released with a Code of Conduct. By contributing to this project, you agree to abide by its terms.
