DAOS-16927 pool: Store server handles in RDBs #15846
base: master
Conversation
Ticket title is 'Special pool and container handles should be stored persistently in RDBs'
353983c to 70acfc4 (compare)
Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/345/log
Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/340/log
Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/346/log
Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/337/log
See the Jira ticket for the background. In the upgrade case, we try to use the existing volatile server handles generated by the old method, so that problems are hopefully less likely to occur.
Features: pool
Signed-off-by: Li Wei <[email protected]>
Required-githooks: true
70acfc4 to 075a037 (compare)
The previous Ubuntu build failed because it could not resolve dependencies; rebased to try again.
Due to extremely slow CI responses, I'm requesting reviews before CI results come in.
uuid_copy(pool_hdl_uuid, svc->ps_pool->sp_srv_pool_hdl);
uuid_copy(cont_hdl_uuid, svc->ps_pool->sp_srv_cont_hdl);
if (svc->ps_global_version >= DAOS_POOL_GLOBAL_VERSION_WITH_SRV_HDLS) {
        /* See the is_pool_from_srv comment in the "else" branch. */
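For readers skimming the diff, the intent here is roughly the following. This is a sketch, not the patch itself: pool_svc_srv_hdls_get() and gen_volatile_srv_hdls() are hypothetical names, and only the fields shown in the diff above are taken from the real code.

/* Sketch only: decide, based on the pool's global layout version, whether to
 * hand out the server handles stored persistently in the RDB or the volatile
 * ones generated by the old method (the upgrade case). */
static void
pool_svc_srv_hdls_get(struct pool_svc *svc, uuid_t pool_hdl_uuid,
                      uuid_t cont_hdl_uuid)
{
        if (svc->ps_global_version >= DAOS_POOL_GLOBAL_VERSION_WITH_SRV_HDLS) {
                /* New layout: the server handles were generated once, stored
                 * in the RDB, and cached in ds_pool, so every PS leader hands
                 * out the same UUIDs. */
                uuid_copy(pool_hdl_uuid, svc->ps_pool->sp_srv_pool_hdl);
                uuid_copy(cont_hdl_uuid, svc->ps_pool->sp_srv_cont_hdl);
        } else {
                /* Old layout (a 2.6 pool hosted by 2.8 code): keep using the
                 * volatile handles generated by the old method. */
                gen_volatile_srv_hdls(svc, pool_hdl_uuid, cont_hdl_uuid);
        }
}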
I'm a bit worried about this "delay the sp_srv_cont_hdl initialization in ds_pool_iv_refresh_hdl()" approach: what if an IV or other request reaches the leader and calls ds_cont_hdl_rdb_lookup() before sp_srv_cont_hdl is initialized?
I also don't quite see why we need to compare the request handle with this global sp_srv_cont_hdl in ds_cont_hdl_rdb_lookup(); the function is only called by cont_iv_ent_fetch(), which is for IV_CONT_CAPA. Probably it's because we mistakenly treat the global handle as a normal open handle when fetching IV_CONT_CAPA? (See cont_iv_hdl_fetch(); it's called by the common request handler and doesn't distinguish the global handle from normal handles.)
Clearing the global handles in pool_iv_ent_invalid() doesn't look safe to me either; it looks like this IV invalidate function could be called when an IV request fails. Given that the global handles are immutable now, I'm wondering whether we still need to invalidate them at all.
Just raising some concerns here; they could be addressed in a separate ticket.
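To make the first concern concrete, a guard along these lines would at least handle the not-yet-initialized handle explicitly. Purely illustrative: hdl_is_server_hdl() is a made-up helper, and the real patch may deal with this window differently.

/* Illustrative only: sp_srv_cont_hdl stays all-zero until
 * ds_pool_iv_refresh_hdl() has run, so don't compare against it before then. */
static bool
hdl_is_server_hdl(struct ds_pool *pool, const uuid_t req_hdl)
{
        if (uuid_is_null(pool->sp_srv_cont_hdl))
                return false;
        return uuid_compare(req_hdl, pool->sp_srv_cont_hdl) == 0;
}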
@NiuYawei, agreed. When writing this patch, the IV code was the most difficult part to investigate. I tried to 1) maintain the current algorithm for the "2.8 code hosting 2.6 pool" case, and 2) avoid making big changes to code I don't confidently understand.
One change I'm confident about is that we need new IV operation semantics: always update/invalidate an IV on the local node synchronously, while updating/invalidating it on the other nodes synchronously or asynchronously based on flags. Then this trick here would no longer be necessary. Such semantics would also benefit other cases on a PS leader.
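Roughly what I have in mind (all names below are hypothetical, not an existing DAOS API; iv_apply_local() and iv_propagate() stand in for the local apply and group propagation steps):

enum iv_remote_mode {
        IV_REMOTE_SYNC,         /* return only after the whole group has the update */
        IV_REMOTE_ASYNC,        /* return once the local entry has been updated */
};

/* The local entry is always updated synchronously before this returns; only
 * the propagation to the rest of the group is governed by "mode". */
static int
iv_update(struct iv_entry *ent, enum iv_remote_mode mode)
{
        int rc;

        rc = iv_apply_local(ent);               /* always synchronous locally */
        if (rc != 0)
                return rc;
        return iv_propagate(ent, mode);         /* sync or async per caller */
}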
In 3.0, we will be able to drop the current algorithm because 2.6 pools will no longer be supported. We can gradually remove the complexities during (or even before) the 3.0 development.
If practical, it may be worth exercising this patch with the full regression tests in server/metadata.py, since there is some checking against a specific number of resources (DAOS containers) being created before rdb gets an out-of-space error. Maybe Features: metadata? It might be better to run that in a separate debug PR rather than in this one, though. Or (if it's even manageable with the latest code), run the server/metadata.py tests manually on a developer system with launch.py.
See the Jira ticket for the background. In the upgrade case, we try to use the existing volatile server handles generated by the old method, so that problems are hopefully less likely to occur.
Features: pool
Before requesting gatekeeper:
Features: (or Test-tag*) commit pragma was used, or there is a reason documented that there are no appropriate tags for this PR.
Gatekeeper: