Changelog for 0.9.5 release #2143
Conversation
@esantorella has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
Thanks!
#### New features

Hypervolume Knowledge Gradient (HVKG):
* Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning the unknown observation(s) that would be received if X were evaluated (#1950).
I find this sentence a bit hard to follow, but it looks like this is also what's in the docstring. Probably OK to leave as is.
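To make that entry more concrete, here is a minimal, construction-only sketch of what it describes: HVKG scores a candidate `X` by the expected gain in hypervolume of the best fixed-size Pareto set after fantasizing the observation(s) that would be received at `X`. The constructor arguments shown (`ref_point`, `num_fantasies`, `num_pareto`) and the module path are assumptions based on this changelog entry and the HVKG tutorials, not a verified 0.9.5 API reference.

```python
import torch

from botorch.models import ModelListGP, SingleTaskGP
# Module path assumed from the HVKG feature added in #1950.
from botorch.acquisition.multi_objective.hypervolume_knowledge_gradient import (
    qHypervolumeKnowledgeGradient,
)

# Toy two-objective problem: five observed points in a 2-d design space.
train_X = torch.rand(5, 2, dtype=torch.double)
train_Y1 = train_X.sum(dim=-1, keepdim=True)
train_Y2 = (train_X[:, :1] - train_X[:, 1:]).abs()

# One GP per objective, wrapped in a ModelListGP.
model = ModelListGP(
    SingleTaskGP(train_X, train_Y1),
    SingleTaskGP(train_X, train_Y2),
)

# HVKG values a candidate X by the expected increase in hypervolume of the
# best Pareto set of size `num_pareto` after conditioning on fantasized
# observation(s) at X. Argument names below are assumed, not verified.
acqf = qHypervolumeKnowledgeGradient(
    model=model,
    ref_point=torch.zeros(2, dtype=torch.double),  # hypervolume reference point
    num_fantasies=8,   # fantasy draws of the unknown observation(s) at X
    num_pareto=10,     # size of the hypervolume-maximizing set
)
```

As with other knowledge-gradient acquisitions, the resulting one-shot acquisition function would then be optimized (e.g. via `optimize_acqf`); the tutorials added in #2094 and #2101 walk through the full loop.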
typo fix from review
Co-authored-by: Sait Cakmak <[email protected]>
@esantorella has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator.
* Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning the unknown observation(s) that would be received if X were evaluated (#1950).
* Add initializer for one-shot HVKG (#1982).
* Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
* Illustrate how to use Multi-Fidelity HVKG (MF-HVKG) (#2101).
Rather than having a line item for each PR, can we logically group the PRs together?
Suggested change:

Before:
* Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning the unknown observation(s) that would be received if X were evaluated (#1950).
* Add initializer for one-shot HVKG (#1982).
* Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
* Illustrate how to use Multi-Fidelity HVKG (MF-HVKG) (#2101).

After:
* Add `qHypervolumeKnowledgeGradient`, which seeks to maximize the difference in hypervolume of the hypervolume-maximizing set of a fixed size after conditioning the unknown observation(s) that would be received if X were evaluated (#1950, #1982, #2101).
* Add tutorial on decoupled Multi-Objective Bayesian Optimization (MOBO) with HVKG (#2094).
@esantorella merged this pull request in ecf9ac1.
Motivation
Changelog for 0.9.5 release
Test Plan