
ENH - Add support for L1 + L2 regularization in SparseLogisticRegression #278

Merged (3 commits, Nov 5, 2024)

Conversation

@AnavAgrawal (Contributor) commented Nov 2, 2024

Context of the PR

Closes #231

Contributions of the PR

Adds ElasticNet-like L1 + L2 regularization support to SparseLogisticRegression.

Checks before merging PR

  • added documentation for any new feature
  • added unit tests
  • edited the what's new (if applicable)
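For reference, the combined L1 + L2 penalty can be sketched as below. This uses the common scikit-learn elastic-net parametrization (`alpha` scaling, `l1_ratio` mixing); the exact convention used in skglm's `SparseLogisticRegression` may differ, so treat this as an illustration rather than the library's implementation.

```python
import numpy as np

def elastic_net_penalty(w, alpha, l1_ratio):
    """ElasticNet-style combined penalty (scikit-learn convention, assumed):
        alpha * (l1_ratio * ||w||_1 + 0.5 * (1 - l1_ratio) * ||w||_2^2)
    l1_ratio=1 recovers a pure L1 (lasso) penalty; l1_ratio=0 a pure L2 (ridge) one.
    """
    w = np.asarray(w, dtype=float)
    l1 = np.sum(np.abs(w))        # ||w||_1
    l2 = 0.5 * np.sum(w ** 2)     # 0.5 * ||w||_2^2
    return alpha * (l1_ratio * l1 + (1.0 - l1_ratio) * l2)

w = np.array([0.5, -1.0, 0.0, 2.0])
print(elastic_net_penalty(w, alpha=0.1, l1_ratio=1.0))  # pure L1
print(elastic_net_penalty(w, alpha=0.1, l1_ratio=0.5))  # mixed L1 + L2
```

Keeping part of the penalty L1 preserves sparsity in the coefficients, while the L2 part stabilizes the solution when features are correlated.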

@mathurinm (Collaborator) commented:
Thanks @AnavAgrawal, I'll merge when it's green.

One small comment: you sent the PR from your main branch.
[screenshot: PR branch view]

This means that your main and this repo's main have now diverged, which will be a pain for future PRs. See a summary of good practices and how-tos for working with PRs here: https://github.com/mathurinm/github-assignment/?tab=readme-ov-file#summary-how-to-contribute-to-an-existing-repository

@mathurinm mathurinm merged commit 1225970 into scikit-learn-contrib:main Nov 5, 2024
4 checks passed
@AnavAgrawal (Contributor, Author) commented:

Oh, I see the issue. Thanks for the resource, @mathurinm! I'll keep this in mind for next time.
