Enhancement of Model Evaluation Metrics #102
👋 Thank you for raising an issue! We appreciate your effort in helping us improve. Our team will review it shortly. Stay tuned!
I would like to work on this.

Description
The evaluate_model function currently focuses primarily on accuracy and F1-score for classification models, and on Mean Squared Error (MSE) and R² for regression models. We propose incorporating additional evaluation metrics: Precision, Recall, and ROC-AUC for classifiers, and Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), R², and Symmetric Mean Absolute Percentage Error (SMAPE) for regression models.

Problem it Solves
This enhancement gives users a more comprehensive evaluation of model performance, supporting better-informed decisions about model selection, tuning, and interpretation in the context of explainable AI.

Proposed Solution
Extend the evaluate_model function to compute and return the additional metrics listed above (see the sketch after this issue body).
Alternatives Considered
Develop custom functions for each metric if more flexibility is needed for specific use cases.

Additional Context
This proposal pertains to the main.py file, which is used for evaluating machine learning models with XAI.
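The issue does not include the current body of evaluate_model from main.py, so the following is only a minimal sketch of what the extended function could look like, assuming scikit-learn is available and that the function receives a fitted model plus held-out test data. The signature, the task argument, and the smape helper are illustrative assumptions, not the project's existing API.

```python
import numpy as np
from sklearn.metrics import (
    accuracy_score, f1_score, precision_score, recall_score, roc_auc_score,
    mean_squared_error, mean_absolute_error, r2_score,
)

def smape(y_true, y_pred):
    """Symmetric Mean Absolute Percentage Error, in percent.

    Uses the common definition 100 * mean(|y_pred - y_true| /
    ((|y_true| + |y_pred|) / 2)); a small floor on the denominator
    guards against division by zero when both values are 0.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0
    return 100.0 * np.mean(np.abs(y_pred - y_true) / np.maximum(denom, 1e-12))

def evaluate_model(model, X_test, y_test, task="classification"):
    """Return a dict of evaluation metrics for the given task."""
    y_pred = model.predict(X_test)
    if task == "classification":
        metrics = {
            "accuracy": accuracy_score(y_test, y_pred),
            "f1": f1_score(y_test, y_pred, average="weighted"),
            "precision": precision_score(y_test, y_pred, average="weighted"),
            "recall": recall_score(y_test, y_pred, average="weighted"),
        }
        # ROC-AUC needs class probabilities; skip it for models
        # that do not expose predict_proba.
        if hasattr(model, "predict_proba"):
            proba = model.predict_proba(X_test)
            if proba.shape[1] == 2:  # binary: score the positive class
                metrics["roc_auc"] = roc_auc_score(y_test, proba[:, 1])
            else:  # multiclass: one-vs-rest averaging
                metrics["roc_auc"] = roc_auc_score(y_test, proba, multi_class="ovr")
        return metrics
    # regression
    mse = mean_squared_error(y_test, y_pred)
    return {
        "mse": mse,
        "rmse": np.sqrt(mse),
        "mae": mean_absolute_error(y_test, y_pred),
        "r2": r2_score(y_test, y_pred),
        "smape": smape(y_test, y_pred),
    }
```

Returning a dict keeps the change easy to extend: callers can ignore keys they do not need, and further metrics can be added later without changing the function's signature.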
Hey @Kajalkansal30, do you want to work on this issue, since you raised it first?
Currently, the evaluate_model function focuses primarily on accuracy and F1-score for classification models, and MSE and R² for regression models. We could enhance this by including additional evaluation metrics like Precision, Recall, and ROC-AUC for classifiers, and MAE (Mean Absolute Error) and Adjusted R² for regression models. This would provide users with a more comprehensive view of model performance.
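The comment above also suggests Adjusted R², which scikit-learn does not expose directly; it can be derived from R² given the number of samples n and predictors p. A minimal sketch follows (the helper name and signature are illustrative, not part of the project's code):

```python
from sklearn.metrics import r2_score

def adjusted_r2(y_true, y_pred, n_features):
    """Adjusted R²: penalizes R² for the number of predictors.

    adj_R2 = 1 - (1 - R2) * (n - 1) / (n - p - 1),
    where n is the sample count and p the number of features.
    """
    n = len(y_true)
    r2 = r2_score(y_true, y_pred)
    return 1 - (1 - r2) * (n - 1) / (n - n_features - 1)
```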