Enhancement of Model Evaluation Metrics #102

Open
Kajalkansal30 opened this issue Oct 14, 2024 · 3 comments

Comments

@Kajalkansal30

Currently, the evaluate_model function focuses primarily on accuracy and F1-score for classification models, and MSE and R² for regression models. We could enhance this by including additional evaluation metrics like Precision, Recall, and ROC-AUC for classifiers, and MAE (Mean Absolute Error) and Adjusted R² for regression models. This would provide users with a more comprehensive view of model performance.
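
A minimal sketch of the classification side of this, using scikit-learn. The helper name `classification_metrics` and its signature are illustrative assumptions, since the actual `evaluate_model` signature in this repo isn't shown here:

```python
from sklearn.metrics import precision_score, recall_score, roc_auc_score

def classification_metrics(y_true, y_pred, y_score):
    """Additional classifier metrics proposed above (illustrative helper)."""
    return {
        "precision": precision_score(y_true, y_pred, average="weighted"),
        "recall": recall_score(y_true, y_pred, average="weighted"),
        # ROC-AUC needs scores, not labels; for a binary classifier pass
        # the positive-class column, e.g. model.predict_proba(X)[:, 1].
        "roc_auc": roc_auc_score(y_true, y_score),
    }
```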


👋 Thank you for raising an issue! We appreciate your effort in helping us improve. Our team will review it shortly. Stay tuned!

@anushka1511

I would like to work on this.

Description

The evaluate_model function currently focuses primarily on accuracy and F1-score for classification models, as well as Mean Squared Error (MSE) and R² for regression models. To improve this function, we propose incorporating additional evaluation metrics, including Precision, Recall, and ROC-AUC for classifiers, along with Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), R², and Symmetric Mean Absolute Percentage Error (SMAPE) for regression models.

Problem it Solves

This enhancement ensures that users receive a comprehensive evaluation of model performance, facilitating better-informed decisions regarding model selection, tuning, and interpretation in the context of explainable AI.

Proposed Solution

Add New Metrics: Extend the evaluate_model function to compute and return additional metrics (a regression sketch follows the list below):

  • For classification: Precision, Recall, ROC-AUC.
  • For regression: MAE, RMSE, R², SMAPE.
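
As a sketch of the regression side: scikit-learn provides MAE and R² directly, RMSE is the square root of MSE, and SMAPE has no scikit-learn implementation, so it is computed by hand below. The helper name and return layout are illustrative assumptions:

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def regression_metrics(y_true, y_pred):
    """Additional regression metrics proposed above (illustrative helper)."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return {
        "mae": mean_absolute_error(y_true, y_pred),
        "rmse": np.sqrt(mean_squared_error(y_true, y_pred)),
        "r2": r2_score(y_true, y_pred),
        # SMAPE in percent; the small epsilon guards against division by
        # zero when both the true and predicted values are 0.
        "smape": 100.0 * np.mean(
            2.0 * np.abs(y_pred - y_true)
            / (np.abs(y_true) + np.abs(y_pred) + 1e-12)
        ),
    }
```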

Alternatives Considered

Develop custom functions for each metric if more flexibility is needed for specific use cases.
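
For instance, Adjusted R² (mentioned in the original report) has no scikit-learn implementation and additionally needs the feature count, so it would be a custom function either way. A minimal sketch, with illustrative naming:

```python
from sklearn.metrics import r2_score

def adjusted_r2(y_true, y_pred, n_features):
    """Adjusted R² = 1 - (1 - R²) * (n - 1) / (n - p - 1),
    which penalizes R² for the number of predictors p."""
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    return 1.0 - (1.0 - r2) * (n - 1) / (n - n_features - 1)
```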

Additional Context

This proposal pertains to the main.py file, which is used for evaluating machine learning models with XAI.

@ombhojane (Owner)

hey @Kajalkansal30, do you want to work on this issue, since you raised it first?
