
DAOS-16927 pool: Store server handles in RDBs #15846

Open: wants to merge 1 commit into master from liw/special-hdls

Conversation

@liw liw commented Feb 5, 2025

See the Jira ticket for the background. In the upgrade case, we try to reuse the existing volatile server handles generated by the old method, so that upgrade problems are less likely to occur.

Features: pool
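
For illustration, a minimal sketch in the spirit of the change but not taken from the patch: pools at the new global version get handles that are generated once and persisted in the RDB, while older pools keep the volatile handles from the old method. The two pool_svc_*_srv_hdls helpers are hypothetical placeholders; struct pool_svc, the DER_* codes, and the sp_srv_*_hdl fields are from the DAOS pool-service code.

#include <uuid/uuid.h>

/* Sketch only: version-gated handle selection. Only the two helper
 * functions marked "hypothetical" are invented here. */
static int
pool_srv_hdls_get(struct pool_svc *svc, uuid_t pool_hdl, uuid_t cont_hdl)
{
        int rc;

        if (svc->ps_global_version < DAOS_POOL_GLOBAL_VERSION_WITH_SRV_HDLS) {
                /* Upgrade case: reuse the volatile handles from the old method. */
                uuid_copy(pool_hdl, svc->ps_pool->sp_srv_pool_hdl);
                uuid_copy(cont_hdl, svc->ps_pool->sp_srv_cont_hdl);
                return 0;
        }

        rc = pool_svc_lookup_srv_hdls(svc, pool_hdl, cont_hdl); /* hypothetical */
        if (rc == -DER_NONEXIST) {
                /* First start at the new version: generate and persist. */
                uuid_generate(pool_hdl);
                uuid_generate(cont_hdl);
                rc = pool_svc_store_srv_hdls(svc, pool_hdl, cont_hdl); /* hypothetical */
        }
        return rc;
}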

Before requesting gatekeeper:

  • There are two review approvals, and any prior change requests have been resolved.
  • Testing is complete and all tests passed, or there is a reason documented in the PR why it should be force-landed and the forced-landing tag is set.
  • The Features: (or Test-tag*) commit pragma was used, or there is a reason documented that there are no appropriate tags for this PR.
  • Commit messages follow the guidelines outlined here.
  • Any tests skipped by the ticket being addressed have been run and passed in the PR.

Gatekeeper:

  • You are the appropriate gatekeeper to be landing the patch.
  • The PR has 2 reviews by people familiar with the code, including appropriate owners.
  • Githooks were used. If not, request that the user install them, and check copyright dates.
  • Checkpatch issues are resolved. Pay particular attention to ones that will show up on future PRs.
  • All builds have passed. Check non-required builds for any new compiler warnings.
  • Sufficient testing is done. Check feature pragmas and test tags and that tests skipped for the ticket are run and now pass with the changes.
  • If applicable, the PR has addressed any potential version compatibility issues.
  • Check the target branch. If it is the master branch, should the PR go to a feature branch? If it is a release branch, does it have merge approval in the JIRA ticket?
  • Extra checks if forced landing is requested
    • Review comments are sufficiently resolved, particularly by prior reviewers that requested changes.
    • No new NLT or valgrind warnings. Check the classic view.
    • Quick-build or Quick-functional is not used.
  • Fix the commit message upon landing. Check the standard here. Edit it to create a single commit. If necessary, ask the submitter for a new summary.


github-actions bot commented Feb 5, 2025

Ticket title is 'Special pool and container handles should be stored persistently in RDBs'
Status is 'In Progress'
https://daosio.atlassian.net/browse/DAOS-16927

@liw force-pushed the liw/special-hdls branch from 76f8439 to ea42213 on February 5, 2025 23:42
src/pool/srv_pool.c (outdated review thread; resolved)
@liw force-pushed the liw/special-hdls branch 2 times, most recently from 353983c to 70acfc4 on February 12, 2025 00:26
@daosbuild1 (Collaborator)

Test stage Build DEB on Ubuntu 20.04 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/345/log

@daosbuild1 (Collaborator)

Test stage Build RPM on EL 9 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/340/log

@daosbuild1 (Collaborator)

Test stage Build RPM on EL 8 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/346/log

@daosbuild1 (Collaborator)

Test stage Build RPM on Leap 15.5 completed with status FAILURE. https://build.hpdd.intel.com//job/daos-stack/job/daos/view/change-requests/job/PR-15846/4/execution/node/337/log

See the Jira ticket for the background. In the upgrade case, we try to
reuse the existing volatile server handles generated by the old method,
so that upgrade problems are less likely to occur.

Features: pool
Signed-off-by: Li Wei <[email protected]>
Required-githooks: true
@liw force-pushed the liw/special-hdls branch from 70acfc4 to 075a037 on February 12, 2025 02:00
@liw (Contributor Author) commented Feb 12, 2025

The previous Ubuntu build failed because dependencies could not be resolved; rebasing and trying again.

@liw liw marked this pull request as ready for review February 12, 2025 02:18
@liw liw requested review from a team as code owners February 12, 2025 02:18
@liw (Contributor Author) commented Feb 12, 2025

Due to extremely slow CI responses, I'm requesting reviews before CI results come in.

@liw liw requested review from liuxuezhao and NiuYawei February 12, 2025 02:19
Review thread on src/pool/srv_pool.c:

uuid_copy(pool_hdl_uuid, svc->ps_pool->sp_srv_pool_hdl);
uuid_copy(cont_hdl_uuid, svc->ps_pool->sp_srv_cont_hdl);
if (svc->ps_global_version >= DAOS_POOL_GLOBAL_VERSION_WITH_SRV_HDLS) {
        /* See the is_pool_from_srv comment in the "else" branch. */
@NiuYawei (Contributor) commented:

I'm a bit worried about this "delay the sp_srv_cont_hdl initialization to ds_pool_iv_refresh_hdl()" approach: what if an IV (or any other) request reaches the leader and calls ds_cont_hdl_rdb_lookup() before sp_srv_cont_hdl is initialized?

I also don't quite see why we need to compare the request handle with the global sp_srv_cont_hdl in ds_cont_hdl_rdb_lookup(). This function is only called by cont_iv_ent_fetch(), which is for IV_CONT_CAPA; perhaps we mistakenly treat the global handle as a normal open handle when fetching IV_CONT_CAPA? (See cont_iv_hdl_fetch(): it is called by the common request handler and doesn't distinguish the global handle from normal handles.)

Clearing the global handles in pool_iv_ent_invalid() doesn't look safe to me either; it seems this IV invalidate function could be called when an IV request fails. Given that the global handles are immutable now, I wonder whether we still need to invalidate them at all.

Just raising some concerns here; they could be addressed in a separate ticket.
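
To make the first concern concrete, here is a hypothetical guard (not code from this PR) that a lookup path could use while the handle may still be uninitialized; the function name and error choices are illustrative only:

/* Before ds_pool_iv_refresh_hdl() runs, sp_srv_cont_hdl is still the null
 * UUID, so comparing it against a request handle would misclassify the
 * request. A guard could fail the lookup with a retryable error instead. */
static int
cont_hdl_lookup_guarded(struct ds_pool *pool, uuid_t req_hdl)
{
        if (uuid_is_null(pool->sp_srv_cont_hdl))
                return -DER_BUSY;       /* handles not initialized yet; retry */
        if (uuid_compare(req_hdl, pool->sp_srv_cont_hdl) == 0)
                return 0;               /* this is the server container handle */
        return -DER_NONEXIST;           /* fall through to normal handle lookup */
}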

@liw (Contributor Author) replied:

@NiuYawei, agreed. When writing this patch, this IV code was the most difficult part to investigate. I tried to 1) maintain the current algorithm for the "2.8 code hosting 2.6 pool" case, and 2) avoid making big changes to code I don't confidently understand.

One change I'm confident about is that we need new IV operation semantics: always apply an IV update/invalidation to the local node synchronously, while propagating it to the other nodes either synchronously or asynchronously based on flags. Then this trick here would no longer be necessary. Such semantics would also benefit other cases on a PS leader. A sketch of the idea follows below.

In 3.0, we will be able to drop the current algorithm because 2.6 pools will no longer be supported. We can gradually remove the complexities during (or even before) the 3.0 development.
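
A minimal sketch of the proposed semantics, assuming a hypothetical flag and helpers (these are not the actual DAOS IV entry points):

/* Hypothetical flag: defer propagation to the other nodes. */
#define POOL_IV_ASYNC_REMOTE (1U << 0)

/* Proposed semantics: the local copy is always updated synchronously, so a
 * subsequent lookup on this node (e.g., on the PS leader) never observes a
 * stale or uninitialized value; only the fan-out to the other nodes may be
 * deferred. All three pool_iv_update_* helpers are hypothetical. */
static int
pool_iv_update_hdls(struct ds_iv_ns *ns, struct pool_iv_entry *ent,
                    unsigned int flags)
{
        int rc;

        rc = pool_iv_update_local(ns, ent); /* always synchronous */
        if (rc != 0)
                return rc;

        if (flags & POOL_IV_ASYNC_REMOTE)
                return pool_iv_update_remote_async(ns, ent);
        return pool_iv_update_remote_sync(ns, ent);
}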

@kccain (Contributor) left a comment:

If practical, it may be worth exercising this patch with the full regression tests in server/metadata.py, since there is some checking against a specific number of resources (DAOS containers) being created before rdb gets an out-of-space error. Maybe Features: metadata? It might be better to run that in a separate debug PR rather than changing this one, though. Or, if it's even manageable with the latest code, run the server/metadata.py tests manually on a developer system with launch.py.
